00:00:00.001  Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v23.11" build number 229
00:00:00.001  originally caused by:
00:00:00.001   Started by upstream project "nightly-trigger" build number 3730
00:00:00.001   originally caused by:
00:00:00.001    Started by timer
00:00:00.017  Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.017  The recommended git tool is: git
00:00:00.017  using credential 00000000-0000-0000-0000-000000000002
00:00:00.019   > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.039  Fetching changes from the remote Git repository
00:00:00.041   > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.065  Using shallow fetch with depth 1
00:00:00.065  Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.065   > git --version # timeout=10
00:00:00.111   > git --version # 'git version 2.39.2'
00:00:00.111  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.167  Setting http proxy: proxy-dmz.intel.com:911
00:00:00.167   > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.069   > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.081   > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.092  Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:03.092   > git config core.sparsecheckout # timeout=10
00:00:03.102   > git read-tree -mu HEAD # timeout=10
00:00:03.118   > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:03.136  Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:03.136   > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:03.220  [Pipeline] Start of Pipeline
00:00:03.234  [Pipeline] library
00:00:03.236  Loading library shm_lib@master
00:00:03.236  Library shm_lib@master is cached. Copying from home.
00:00:03.253  [Pipeline] node
00:00:03.265  Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest_2
00:00:03.267  [Pipeline] {
00:00:03.277  [Pipeline] catchError
00:00:03.279  [Pipeline] {
00:00:03.291  [Pipeline] wrap
00:00:03.300  [Pipeline] {
00:00:03.310  [Pipeline] stage
00:00:03.311  [Pipeline] { (Prologue)
00:00:03.330  [Pipeline] echo
00:00:03.332  Node: VM-host-WFP7
00:00:03.340  [Pipeline] cleanWs
00:00:03.351  [WS-CLEANUP] Deleting project workspace...
00:00:03.351  [WS-CLEANUP] Deferred wipeout is used...
00:00:03.358  [WS-CLEANUP] done
00:00:03.558  [Pipeline] setCustomBuildProperty
00:00:03.630  [Pipeline] httpRequest
00:00:03.946  [Pipeline] echo
00:00:03.947  Sorcerer 10.211.164.20 is alive
00:00:03.956  [Pipeline] retry
00:00:03.959  [Pipeline] {
00:00:03.971  [Pipeline] httpRequest
00:00:03.975  HttpMethod: GET
00:00:03.976  URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.976  Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.978  Response Code: HTTP/1.1 200 OK
00:00:03.978  Success: Status code 200 is in the accepted range: 200,404
00:00:03.978  Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.125  [Pipeline] }
00:00:04.140  [Pipeline] // retry
00:00:04.145  [Pipeline] sh
00:00:04.422  + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.435  [Pipeline] httpRequest
00:00:05.003  [Pipeline] echo
00:00:05.004  Sorcerer 10.211.164.20 is alive
00:00:05.012  [Pipeline] retry
00:00:05.014  [Pipeline] {
00:00:05.024  [Pipeline] httpRequest
00:00:05.028  HttpMethod: GET
00:00:05.029  URL: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:00:05.029  Sending request to url: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:00:05.030  Response Code: HTTP/1.1 200 OK
00:00:05.031  Success: Status code 200 is in the accepted range: 200,404
00:00:05.031  Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:00:25.320  [Pipeline] }
00:00:25.338  [Pipeline] // retry
00:00:25.345  [Pipeline] sh
00:00:25.631  + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:00:28.185  [Pipeline] sh
00:00:28.467  + git -C spdk log --oneline -n5
00:00:28.467  b18e1bd62 version: v24.09.1-pre
00:00:28.467  19524ad45 version: v24.09
00:00:28.467  9756b40a3 dpdk: update submodule to include alarm_cancel fix
00:00:28.467  a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810
00:00:28.467  3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys
00:00:28.486  [Pipeline] withCredentials
00:00:28.498   > git --version # timeout=10
00:00:28.511   > git --version # 'git version 2.39.2'
00:00:28.528  Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:28.530  [Pipeline] {
00:00:28.539  [Pipeline] retry
00:00:28.541  [Pipeline] {
00:00:28.555  [Pipeline] sh
00:00:28.839  + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:00:29.110  [Pipeline] }
00:00:29.127  [Pipeline] // retry
00:00:29.131  [Pipeline] }
00:00:29.146  [Pipeline] // withCredentials
00:00:29.154  [Pipeline] httpRequest
00:00:29.971  [Pipeline] echo
00:00:29.972  Sorcerer 10.211.164.20 is alive
00:00:29.981  [Pipeline] retry
00:00:29.983  [Pipeline] {
00:00:29.997  [Pipeline] httpRequest
00:00:30.001  HttpMethod: GET
00:00:30.002  URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:30.002  Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:30.011  Response Code: HTTP/1.1 200 OK
00:00:30.012  Success: Status code 200 is in the accepted range: 200,404
00:00:30.013  Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:02:18.471  [Pipeline] }
00:02:18.504  [Pipeline] // retry
00:02:18.508  [Pipeline] sh
00:02:18.784  + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:02:20.178  [Pipeline] sh
00:02:20.462  + git -C dpdk log --oneline -n5
00:02:20.462  eeb0605f11 version: 23.11.0
00:02:20.462  238778122a doc: update release notes for 23.11
00:02:20.462  46aa6b3cfc doc: fix description of RSS features
00:02:20.462  dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:02:20.462  7e421ae345 devtools: support skipping forbid rule check
00:02:20.480  [Pipeline] writeFile
00:02:20.496  [Pipeline] sh
00:02:20.781  + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:02:20.793  [Pipeline] sh
00:02:21.134  + cat autorun-spdk.conf
00:02:21.134  SPDK_RUN_FUNCTIONAL_TEST=1
00:02:21.134  SPDK_RUN_ASAN=1
00:02:21.135  SPDK_RUN_UBSAN=1
00:02:21.135  SPDK_TEST_RAID=1
00:02:21.135  SPDK_TEST_NATIVE_DPDK=v23.11
00:02:21.135  SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:21.135  SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:21.142  RUN_NIGHTLY=1
00:02:21.144  [Pipeline] }
00:02:21.158  [Pipeline] // stage
00:02:21.172  [Pipeline] stage
00:02:21.174  [Pipeline] { (Run VM)
00:02:21.186  [Pipeline] sh
00:02:21.470  + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:02:21.471  + echo 'Start stage prepare_nvme.sh'
00:02:21.471  Start stage prepare_nvme.sh
00:02:21.471  + [[ -n 2 ]]
00:02:21.471  + disk_prefix=ex2
00:02:21.471  + [[ -n /var/jenkins/workspace/raid-vg-autotest_2 ]]
00:02:21.471  + [[ -e /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf ]]
00:02:21.471  + source /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf
00:02:21.471  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:21.471  ++ SPDK_RUN_ASAN=1
00:02:21.471  ++ SPDK_RUN_UBSAN=1
00:02:21.471  ++ SPDK_TEST_RAID=1
00:02:21.471  ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:02:21.471  ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:21.471  ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:21.471  ++ RUN_NIGHTLY=1
00:02:21.471  + cd /var/jenkins/workspace/raid-vg-autotest_2
00:02:21.471  + nvme_files=()
00:02:21.471  + declare -A nvme_files
00:02:21.471  + backend_dir=/var/lib/libvirt/images/backends
00:02:21.471  + nvme_files['nvme.img']=5G
00:02:21.471  + nvme_files['nvme-cmb.img']=5G
00:02:21.471  + nvme_files['nvme-multi0.img']=4G
00:02:21.471  + nvme_files['nvme-multi1.img']=4G
00:02:21.471  + nvme_files['nvme-multi2.img']=4G
00:02:21.471  + nvme_files['nvme-openstack.img']=8G
00:02:21.471  + nvme_files['nvme-zns.img']=5G
00:02:21.471  + ((  SPDK_TEST_NVME_PMR == 1  ))
00:02:21.471  + ((  SPDK_TEST_FTL == 1  ))
00:02:21.471  + ((  SPDK_TEST_NVME_FDP == 1  ))
00:02:21.471  + [[ ! -d /var/lib/libvirt/images/backends ]]
00:02:21.471  + for nvme in "${!nvme_files[@]}"
00:02:21.471  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:02:21.471  Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:02:21.471  + for nvme in "${!nvme_files[@]}"
00:02:21.471  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:02:21.471  Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:02:21.471  + for nvme in "${!nvme_files[@]}"
00:02:21.471  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:02:21.471  Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:02:21.471  + for nvme in "${!nvme_files[@]}"
00:02:21.471  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:02:21.471  Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:02:21.471  + for nvme in "${!nvme_files[@]}"
00:02:21.471  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:02:21.471  Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:02:21.471  + for nvme in "${!nvme_files[@]}"
00:02:21.471  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:02:21.471  Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:02:21.471  + for nvme in "${!nvme_files[@]}"
00:02:21.471  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:02:21.731  Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:02:21.731  ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:02:21.731  + echo 'End stage prepare_nvme.sh'
00:02:21.731  End stage prepare_nvme.sh
00:02:21.743  [Pipeline] sh
00:02:22.027  + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:02:22.027  Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39
00:02:22.027  
00:02:22.027  DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant
00:02:22.027  SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk
00:02:22.027  VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_2
00:02:22.027  HELP=0
00:02:22.027  DRY_RUN=0
00:02:22.027  NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,
00:02:22.027  NVME_DISKS_TYPE=nvme,nvme,
00:02:22.027  NVME_AUTO_CREATE=0
00:02:22.027  NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,
00:02:22.027  NVME_CMB=,,
00:02:22.027  NVME_PMR=,,
00:02:22.027  NVME_ZNS=,,
00:02:22.027  NVME_MS=,,
00:02:22.027  NVME_FDP=,,
00:02:22.027  SPDK_VAGRANT_DISTRO=fedora39
00:02:22.027  SPDK_VAGRANT_VMCPU=10
00:02:22.027  SPDK_VAGRANT_VMRAM=12288
00:02:22.027  SPDK_VAGRANT_PROVIDER=libvirt
00:02:22.027  SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:02:22.027  SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:02:22.027  SPDK_OPENSTACK_NETWORK=0
00:02:22.027  VAGRANT_PACKAGE_BOX=0
00:02:22.027  VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:02:22.027  FORCE_DISTRO=true
00:02:22.027  VAGRANT_BOX_VERSION=
00:02:22.027  EXTRA_VAGRANTFILES=
00:02:22.027  NIC_MODEL=virtio
00:02:22.027  
00:02:22.027  mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt'
00:02:22.027  /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_2
00:02:24.568  Bringing machine 'default' up with 'libvirt' provider...
00:02:24.828  ==> default: Creating image (snapshot of base box volume).
00:02:24.828  ==> default: Creating domain with the following settings...
00:02:24.828  ==> default:  -- Name:              fedora39-39-1.5-1721788873-2326_default_1734348230_6083cf5c3cc428e0222c
00:02:24.828  ==> default:  -- Domain type:       kvm
00:02:24.828  ==> default:  -- Cpus:              10
00:02:24.828  ==> default:  -- Feature:           acpi
00:02:24.828  ==> default:  -- Feature:           apic
00:02:24.828  ==> default:  -- Feature:           pae
00:02:24.828  ==> default:  -- Memory:            12288M
00:02:24.828  ==> default:  -- Memory Backing:    hugepages: 
00:02:24.828  ==> default:  -- Management MAC:    
00:02:24.828  ==> default:  -- Loader:            
00:02:24.828  ==> default:  -- Nvram:             
00:02:24.828  ==> default:  -- Base box:          spdk/fedora39
00:02:24.828  ==> default:  -- Storage pool:      default
00:02:24.828  ==> default:  -- Image:             /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734348230_6083cf5c3cc428e0222c.img (20G)
00:02:24.828  ==> default:  -- Volume Cache:      default
00:02:24.828  ==> default:  -- Kernel:            
00:02:24.828  ==> default:  -- Initrd:            
00:02:24.828  ==> default:  -- Graphics Type:     vnc
00:02:24.828  ==> default:  -- Graphics Port:     -1
00:02:24.828  ==> default:  -- Graphics IP:       127.0.0.1
00:02:24.828  ==> default:  -- Graphics Password: Not defined
00:02:24.828  ==> default:  -- Video Type:        cirrus
00:02:24.828  ==> default:  -- Video VRAM:        9216
00:02:24.828  ==> default:  -- Sound Type:	
00:02:24.828  ==> default:  -- Keymap:            en-us
00:02:24.828  ==> default:  -- TPM Path:          
00:02:24.828  ==> default:  -- INPUT:             type=mouse, bus=ps2
00:02:24.828  ==> default:  -- Command line args: 
00:02:24.828  ==> default:     -> value=-device, 
00:02:24.828  ==> default:     -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 
00:02:24.828  ==> default:     -> value=-drive, 
00:02:24.828  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 
00:02:24.828  ==> default:     -> value=-device, 
00:02:24.828  ==> default:     -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:02:24.828  ==> default:     -> value=-device, 
00:02:24.828  ==> default:     -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 
00:02:24.828  ==> default:     -> value=-drive, 
00:02:24.828  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 
00:02:24.828  ==> default:     -> value=-device, 
00:02:24.828  ==> default:     -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:02:24.829  ==> default:     -> value=-drive, 
00:02:24.829  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 
00:02:24.829  ==> default:     -> value=-device, 
00:02:24.829  ==> default:     -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:02:24.829  ==> default:     -> value=-drive, 
00:02:24.829  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 
00:02:24.829  ==> default:     -> value=-device, 
00:02:24.829  ==> default:     -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:02:25.094  ==> default: Creating shared folders metadata...
00:02:25.094  ==> default: Starting domain.
00:02:26.481  ==> default: Waiting for domain to get an IP address...
00:02:44.582  ==> default: Waiting for SSH to become available...
00:02:44.582  ==> default: Configuring and enabling network interfaces...
00:02:49.857      default: SSH address: 192.168.121.75:22
00:02:49.857      default: SSH username: vagrant
00:02:49.857      default: SSH auth method: private key
00:02:52.395  ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:03:00.515  ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:03:07.120  ==> default: Mounting SSHFS shared folder...
00:03:09.024  ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:03:09.024  ==> default: Checking Mount..
00:03:10.933  ==> default: Folder Successfully Mounted!
00:03:10.933  ==> default: Running provisioner: file...
00:03:11.870      default: ~/.gitconfig => .gitconfig
00:03:12.438  
00:03:12.438    SUCCESS!
00:03:12.438  
00:03:12.438    cd to /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:03:12.438    Use vagrant "suspend" and vagrant "resume" to stop and start.
00:03:12.438    Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:03:12.438  
00:03:12.447  [Pipeline] }
00:03:12.462  [Pipeline] // stage
00:03:12.471  [Pipeline] dir
00:03:12.472  Running in /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt
00:03:12.473  [Pipeline] {
00:03:12.486  [Pipeline] catchError
00:03:12.488  [Pipeline] {
00:03:12.501  [Pipeline] sh
00:03:12.786  + vagrant ssh-config --host vagrant
00:03:12.786  + sed -ne /^Host/,$p
00:03:12.786  + tee ssh_conf
00:03:15.322  Host vagrant
00:03:15.322    HostName 192.168.121.75
00:03:15.322    User vagrant
00:03:15.322    Port 22
00:03:15.322    UserKnownHostsFile /dev/null
00:03:15.322    StrictHostKeyChecking no
00:03:15.322    PasswordAuthentication no
00:03:15.322    IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:03:15.322    IdentitiesOnly yes
00:03:15.322    LogLevel FATAL
00:03:15.322    ForwardAgent yes
00:03:15.322    ForwardX11 yes
00:03:15.322  
00:03:15.335  [Pipeline] withEnv
00:03:15.337  [Pipeline] {
00:03:15.350  [Pipeline] sh
00:03:15.632  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:03:15.632  		source /etc/os-release
00:03:15.632  		[[ -e /image.version ]] && img=$(< /image.version)
00:03:15.632  		# Minimal, systemd-like check.
00:03:15.632  		if [[ -e /.dockerenv ]]; then
00:03:15.632  			# Clear garbage from the node's name:
00:03:15.632  			#  agt-er_autotest_547-896 -> autotest_547-896
00:03:15.632  			#  $HOSTNAME is the actual container id
00:03:15.632  			agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:03:15.632  			if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:03:15.632  				# We can assume this is a mount from a host where container is running,
00:03:15.632  				# so fetch its hostname to easily identify the target swarm worker.
00:03:15.632  				container="$(< /etc/hostname) ($agent)"
00:03:15.632  			else
00:03:15.632  				# Fallback
00:03:15.632  				container=$agent
00:03:15.632  			fi
00:03:15.632  		fi
00:03:15.632  		echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:03:15.632  
00:03:15.901  [Pipeline] }
00:03:15.916  [Pipeline] // withEnv
00:03:15.924  [Pipeline] setCustomBuildProperty
00:03:15.936  [Pipeline] stage
00:03:15.938  [Pipeline] { (Tests)
00:03:15.953  [Pipeline] sh
00:03:16.236  + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:03:16.511  [Pipeline] sh
00:03:16.792  + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:03:17.068  [Pipeline] timeout
00:03:17.068  Timeout set to expire in 1 hr 30 min
00:03:17.070  [Pipeline] {
00:03:17.083  [Pipeline] sh
00:03:17.364  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:03:18.368  HEAD is now at b18e1bd62 version: v24.09.1-pre
00:03:18.379  [Pipeline] sh
00:03:18.662  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:03:18.936  [Pipeline] sh
00:03:19.217  + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:03:19.489  [Pipeline] sh
00:03:19.772  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:03:20.031  ++ readlink -f spdk_repo
00:03:20.031  + DIR_ROOT=/home/vagrant/spdk_repo
00:03:20.031  + [[ -n /home/vagrant/spdk_repo ]]
00:03:20.031  + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:03:20.031  + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:03:20.031  + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:03:20.031  + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:03:20.031  + [[ -d /home/vagrant/spdk_repo/output ]]
00:03:20.031  + [[ raid-vg-autotest == pkgdep-* ]]
00:03:20.031  + cd /home/vagrant/spdk_repo
00:03:20.031  + source /etc/os-release
00:03:20.031  ++ NAME='Fedora Linux'
00:03:20.031  ++ VERSION='39 (Cloud Edition)'
00:03:20.031  ++ ID=fedora
00:03:20.031  ++ VERSION_ID=39
00:03:20.031  ++ VERSION_CODENAME=
00:03:20.031  ++ PLATFORM_ID=platform:f39
00:03:20.031  ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:03:20.031  ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:20.031  ++ LOGO=fedora-logo-icon
00:03:20.031  ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:03:20.031  ++ HOME_URL=https://fedoraproject.org/
00:03:20.031  ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:03:20.032  ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:20.032  ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:20.032  ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:20.032  ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:03:20.032  ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:20.032  ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:03:20.032  ++ SUPPORT_END=2024-11-12
00:03:20.032  ++ VARIANT='Cloud Edition'
00:03:20.032  ++ VARIANT_ID=cloud
00:03:20.032  + uname -a
00:03:20.032  Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:03:20.032  + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:03:20.599  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:20.599  Hugepages
00:03:20.599  node     hugesize     free /  total
00:03:20.599  node0   1048576kB        0 /      0
00:03:20.599  node0      2048kB        0 /      0
00:03:20.599  
00:03:20.599  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:03:20.599  virtio                    0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:03:20.599  NVMe                      0000:00:10.0    1b36   0010   unknown nvme             nvme0      nvme0n1
00:03:20.599  NVMe                      0000:00:11.0    1b36   0010   unknown nvme             nvme1      nvme1n1 nvme1n2 nvme1n3
00:03:20.599  + rm -f /tmp/spdk-ld-path
00:03:20.599  + source autorun-spdk.conf
00:03:20.599  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:20.599  ++ SPDK_RUN_ASAN=1
00:03:20.599  ++ SPDK_RUN_UBSAN=1
00:03:20.599  ++ SPDK_TEST_RAID=1
00:03:20.599  ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:03:20.599  ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:03:20.599  ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:20.599  ++ RUN_NIGHTLY=1
00:03:20.599  + ((  SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1  ))
00:03:20.599  + [[ -n '' ]]
00:03:20.599  + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:03:20.599  + for M in /var/spdk/build-*-manifest.txt
00:03:20.599  + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:20.599  + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:03:20.599  + for M in /var/spdk/build-*-manifest.txt
00:03:20.599  + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:20.599  + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:03:20.599  + for M in /var/spdk/build-*-manifest.txt
00:03:20.599  + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:20.599  + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:03:20.858  ++ uname
00:03:20.858  + [[ Linux == \L\i\n\u\x ]]
00:03:20.859  + sudo dmesg -T
00:03:20.859  + sudo dmesg --clear
00:03:20.859  + dmesg_pid=6167
00:03:20.859  + [[ Fedora Linux == FreeBSD ]]
00:03:20.859  + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:20.859  + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:20.859  + sudo dmesg -Tw
00:03:20.859  + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:20.859  + [[ -x /usr/src/fio-static/fio ]]
00:03:20.859  + export FIO_BIN=/usr/src/fio-static/fio
00:03:20.859  + FIO_BIN=/usr/src/fio-static/fio
00:03:20.859  + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:20.859  + [[ ! -v VFIO_QEMU_BIN ]]
00:03:20.859  + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:20.859  + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:20.859  + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:20.859  + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:20.859  + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:20.859  + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:20.859  + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:20.859  Test configuration:
00:03:20.859  SPDK_RUN_FUNCTIONAL_TEST=1
00:03:20.859  SPDK_RUN_ASAN=1
00:03:20.859  SPDK_RUN_UBSAN=1
00:03:20.859  SPDK_TEST_RAID=1
00:03:20.859  SPDK_TEST_NATIVE_DPDK=v23.11
00:03:20.859  SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:03:20.859  SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:20.859  RUN_NIGHTLY=1
00:03:20.859    11:24:46  -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
00:03:20.859    11:24:46  -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:03:20.859     11:24:46  -- scripts/common.sh@15 -- $ shopt -s extglob
00:03:20.859     11:24:46  -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:20.859     11:24:46  -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:20.859     11:24:46  -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:20.859      11:24:46  -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:20.859      11:24:46  -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:20.859      11:24:46  -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:20.859      11:24:46  -- paths/export.sh@5 -- $ export PATH
00:03:20.859      11:24:46  -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:20.859    11:24:46  -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:03:20.859      11:24:46  -- common/autobuild_common.sh@479 -- $ date +%s
00:03:20.859     11:24:46  -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1734348286.XXXXXX
00:03:20.859    11:24:46  -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1734348286.i49vaH
00:03:20.859    11:24:46  -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:03:20.859    11:24:46  -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']'
00:03:20.859     11:24:46  -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:03:20.859    11:24:46  -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:03:20.859    11:24:46  -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:03:20.859    11:24:46  -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp  --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:03:20.859     11:24:46  -- common/autobuild_common.sh@495 -- $ get_config_params
00:03:20.859     11:24:46  -- common/autotest_common.sh@407 -- $ xtrace_disable
00:03:20.859     11:24:46  -- common/autotest_common.sh@10 -- $ set +x
00:03:21.118    11:24:46  -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:03:21.118    11:24:46  -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:03:21.118    11:24:46  -- pm/common@17 -- $ local monitor
00:03:21.118    11:24:46  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:21.118    11:24:46  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:21.118    11:24:46  -- pm/common@25 -- $ sleep 1
00:03:21.118     11:24:46  -- pm/common@21 -- $ date +%s
00:03:21.118     11:24:46  -- pm/common@21 -- $ date +%s
00:03:21.118    11:24:46  -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734348286
00:03:21.118    11:24:46  -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734348286
00:03:21.118  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734348286_collect-cpu-load.pm.log
00:03:21.118  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734348286_collect-vmstat.pm.log
00:03:22.056    11:24:47  -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:03:22.056   11:24:47  -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:22.056   11:24:47  -- spdk/autobuild.sh@12 -- $ umask 022
00:03:22.056   11:24:47  -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:03:22.056   11:24:47  -- spdk/autobuild.sh@16 -- $ date -u
00:03:22.056  Mon Dec 16 11:24:47 AM UTC 2024
00:03:22.056   11:24:47  -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:22.056  v24.09-1-gb18e1bd62
00:03:22.056   11:24:47  -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:03:22.056   11:24:47  -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:03:22.056   11:24:47  -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:03:22.056   11:24:47  -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:03:22.056   11:24:47  -- common/autotest_common.sh@10 -- $ set +x
00:03:22.056  ************************************
00:03:22.056  START TEST asan
00:03:22.056  ************************************
00:03:22.056  using asan
00:03:22.056   11:24:47 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:03:22.056  
00:03:22.056  real	0m0.001s
00:03:22.056  user	0m0.000s
00:03:22.056  sys	0m0.000s
00:03:22.056   11:24:47 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:03:22.056   11:24:47 asan -- common/autotest_common.sh@10 -- $ set +x
00:03:22.056  ************************************
00:03:22.056  END TEST asan
00:03:22.056  ************************************
00:03:22.056   11:24:48  -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:22.056   11:24:48  -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:22.056   11:24:48  -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:03:22.056   11:24:48  -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:03:22.056   11:24:48  -- common/autotest_common.sh@10 -- $ set +x
00:03:22.056  ************************************
00:03:22.056  START TEST ubsan
00:03:22.056  ************************************
00:03:22.056  using ubsan
00:03:22.056   11:24:48 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:03:22.056  
00:03:22.056  real	0m0.001s
00:03:22.056  user	0m0.001s
00:03:22.056  sys	0m0.000s
00:03:22.056   11:24:48 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:03:22.056   11:24:48 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:22.056  ************************************
00:03:22.056  END TEST ubsan
00:03:22.056  ************************************
00:03:22.056   11:24:48  -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
00:03:22.056   11:24:48  -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:03:22.056   11:24:48  -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk
00:03:22.056   11:24:48  -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']'
00:03:22.056   11:24:48  -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:03:22.056   11:24:48  -- common/autotest_common.sh@10 -- $ set +x
00:03:22.056  ************************************
00:03:22.056  START TEST build_native_dpdk
00:03:22.056  ************************************
00:03:22.056   11:24:48 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk
00:03:22.057   11:24:48 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:03:22.057   11:24:48 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:03:22.057   11:24:48 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:03:22.057   11:24:48 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:03:22.057   11:24:48 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:03:22.057   11:24:48 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:03:22.057   11:24:48 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:03:22.057   11:24:48 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:03:22.057   11:24:48 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:03:22.057   11:24:48 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:03:22.057   11:24:48 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:03:22.057    11:24:48 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:03:22.057   11:24:48 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:03:22.057   11:24:48 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:03:22.057   11:24:48 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
00:03:22.317    11:24:48 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:03:22.317   11:24:48 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
00:03:22.317   11:24:48 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
00:03:22.317   11:24:48 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
00:03:22.317   11:24:48 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:03:22.317  eeb0605f11 version: 23.11.0
00:03:22.317  238778122a doc: update release notes for 23.11
00:03:22.317  46aa6b3cfc doc: fix description of RSS features
00:03:22.317  dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:03:22.317  7e421ae345 devtools: support skipping forbid rule check
00:03:22.317   11:24:48 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:03:22.317   11:24:48 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:03:22.317   11:24:48 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0
00:03:22.317   11:24:48 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:03:22.317   11:24:48 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:03:22.317   11:24:48 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:03:22.317   11:24:48 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:03:22.317   11:24:48 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:03:22.317   11:24:48 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:03:22.317   11:24:48 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:03:22.317   11:24:48 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:03:22.317   11:24:48 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:03:22.317   11:24:48 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:03:22.317   11:24:48 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:03:22.317   11:24:48 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk
00:03:22.317    11:24:48 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s
00:03:22.317   11:24:48 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:03:22.317   11:24:48 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:22.317    11:24:48 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:03:22.317    11:24:48 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:03:22.317    11:24:48 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:03:22.317    11:24:48 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:03:22.317    11:24:48 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:03:22.317    11:24:48 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:03:22.317    11:24:48 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:03:22.317    11:24:48 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:03:22.317   11:24:48 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1
00:03:22.317  patching file config/rte_config.h
00:03:22.317  Hunk #1 succeeded at 60 (offset 1 line).
00:03:22.317   11:24:48 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:22.317    11:24:48 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:03:22.317    11:24:48 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:03:22.317    11:24:48 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:03:22.317    11:24:48 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:03:22.317    11:24:48 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:03:22.317    11:24:48 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:03:22.317    11:24:48 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:03:22.317    11:24:48 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@368 -- $ return 0
00:03:22.317   11:24:48 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1
00:03:22.317  patching file lib/pcapng/rte_pcapng.c
00:03:22.317   11:24:48 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>='
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@348 -- $ : 1
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:22.317    11:24:48 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:03:22.317    11:24:48 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:03:22.317    11:24:48 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:03:22.317    11:24:48 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:03:22.317    11:24:48 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:03:22.317    11:24:48 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:03:22.317    11:24:48 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:03:22.317    11:24:48 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:03:22.317   11:24:48 build_native_dpdk -- scripts/common.sh@368 -- $ return 1
00:03:22.317   11:24:48 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false
00:03:22.317    11:24:48 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s
00:03:22.318   11:24:48 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']'
00:03:22.318    11:24:48 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
00:03:22.318   11:24:48 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:03:28.892  The Meson build system
00:03:28.892  Version: 1.5.0
00:03:28.892  Source dir: /home/vagrant/spdk_repo/dpdk
00:03:28.892  Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:03:28.892  Build type: native build
00:03:28.892  Program cat found: YES (/usr/bin/cat)
00:03:28.892  Project name: DPDK
00:03:28.892  Project version: 23.11.0
00:03:28.892  C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:28.892  C linker for the host machine: gcc ld.bfd 2.40-14
00:03:28.892  Host machine cpu family: x86_64
00:03:28.892  Host machine cpu: x86_64
00:03:28.892  Message: ## Building in Developer Mode ##
00:03:28.892  Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:28.892  Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:03:28.892  Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:03:28.892  Program python3 found: YES (/usr/bin/python3)
00:03:28.892  Program cat found: YES (/usr/bin/cat)
00:03:28.892  config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:03:28.892  Compiler for C supports arguments -march=native: YES 
00:03:28.892  Checking for size of "void *" : 8 
00:03:28.892  Checking for size of "void *" : 8 (cached)
00:03:28.892  Library m found: YES
00:03:28.892  Library numa found: YES
00:03:28.892  Has header "numaif.h" : YES 
00:03:28.892  Library fdt found: NO
00:03:28.892  Library execinfo found: NO
00:03:28.892  Has header "execinfo.h" : YES 
00:03:28.892  Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:28.892  Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:28.892  Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:28.892  Run-time dependency jansson found: NO (tried pkgconfig)
00:03:28.892  Run-time dependency openssl found: YES 3.1.1
00:03:28.892  Run-time dependency libpcap found: YES 1.10.4
00:03:28.892  Has header "pcap.h" with dependency libpcap: YES 
00:03:28.892  Compiler for C supports arguments -Wcast-qual: YES 
00:03:28.892  Compiler for C supports arguments -Wdeprecated: YES 
00:03:28.892  Compiler for C supports arguments -Wformat: YES 
00:03:28.892  Compiler for C supports arguments -Wformat-nonliteral: NO 
00:03:28.892  Compiler for C supports arguments -Wformat-security: NO 
00:03:28.892  Compiler for C supports arguments -Wmissing-declarations: YES 
00:03:28.892  Compiler for C supports arguments -Wmissing-prototypes: YES 
00:03:28.892  Compiler for C supports arguments -Wnested-externs: YES 
00:03:28.892  Compiler for C supports arguments -Wold-style-definition: YES 
00:03:28.892  Compiler for C supports arguments -Wpointer-arith: YES 
00:03:28.892  Compiler for C supports arguments -Wsign-compare: YES 
00:03:28.892  Compiler for C supports arguments -Wstrict-prototypes: YES 
00:03:28.892  Compiler for C supports arguments -Wundef: YES 
00:03:28.892  Compiler for C supports arguments -Wwrite-strings: YES 
00:03:28.892  Compiler for C supports arguments -Wno-address-of-packed-member: YES 
00:03:28.892  Compiler for C supports arguments -Wno-packed-not-aligned: YES 
00:03:28.892  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:03:28.892  Compiler for C supports arguments -Wno-zero-length-bounds: YES 
00:03:28.892  Program objdump found: YES (/usr/bin/objdump)
00:03:28.892  Compiler for C supports arguments -mavx512f: YES 
00:03:28.892  Checking if "AVX512 checking" compiles: YES 
00:03:28.892  Fetching value of define "__SSE4_2__" : 1 
00:03:28.892  Fetching value of define "__AES__" : 1 
00:03:28.892  Fetching value of define "__AVX__" : 1 
00:03:28.892  Fetching value of define "__AVX2__" : 1 
00:03:28.892  Fetching value of define "__AVX512BW__" : 1 
00:03:28.892  Fetching value of define "__AVX512CD__" : 1 
00:03:28.892  Fetching value of define "__AVX512DQ__" : 1 
00:03:28.892  Fetching value of define "__AVX512F__" : 1 
00:03:28.892  Fetching value of define "__AVX512VL__" : 1 
00:03:28.892  Fetching value of define "__PCLMUL__" : 1 
00:03:28.892  Fetching value of define "__RDRND__" : 1 
00:03:28.892  Fetching value of define "__RDSEED__" : 1 
00:03:28.892  Fetching value of define "__VPCLMULQDQ__" : (undefined) 
00:03:28.892  Fetching value of define "__znver1__" : (undefined) 
00:03:28.892  Fetching value of define "__znver2__" : (undefined) 
00:03:28.892  Fetching value of define "__znver3__" : (undefined) 
00:03:28.892  Fetching value of define "__znver4__" : (undefined) 
00:03:28.892  Compiler for C supports arguments -Wno-format-truncation: YES 
00:03:28.892  Message: lib/log: Defining dependency "log"
00:03:28.892  Message: lib/kvargs: Defining dependency "kvargs"
00:03:28.892  Message: lib/telemetry: Defining dependency "telemetry"
00:03:28.892  Checking for function "getentropy" : NO 
00:03:28.892  Message: lib/eal: Defining dependency "eal"
00:03:28.892  Message: lib/ring: Defining dependency "ring"
00:03:28.892  Message: lib/rcu: Defining dependency "rcu"
00:03:28.892  Message: lib/mempool: Defining dependency "mempool"
00:03:28.892  Message: lib/mbuf: Defining dependency "mbuf"
00:03:28.892  Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:28.892  Fetching value of define "__AVX512F__" : 1 (cached)
00:03:28.892  Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:28.892  Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:28.892  Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:28.892  Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:03:28.892  Compiler for C supports arguments -mpclmul: YES 
00:03:28.892  Compiler for C supports arguments -maes: YES 
00:03:28.892  Compiler for C supports arguments -mavx512f: YES (cached)
00:03:28.892  Compiler for C supports arguments -mavx512bw: YES 
00:03:28.892  Compiler for C supports arguments -mavx512dq: YES 
00:03:28.892  Compiler for C supports arguments -mavx512vl: YES 
00:03:28.892  Compiler for C supports arguments -mvpclmulqdq: YES 
00:03:28.892  Compiler for C supports arguments -mavx2: YES 
00:03:28.892  Compiler for C supports arguments -mavx: YES 
00:03:28.892  Message: lib/net: Defining dependency "net"
00:03:28.892  Message: lib/meter: Defining dependency "meter"
00:03:28.892  Message: lib/ethdev: Defining dependency "ethdev"
00:03:28.892  Message: lib/pci: Defining dependency "pci"
00:03:28.892  Message: lib/cmdline: Defining dependency "cmdline"
00:03:28.892  Message: lib/metrics: Defining dependency "metrics"
00:03:28.892  Message: lib/hash: Defining dependency "hash"
00:03:28.892  Message: lib/timer: Defining dependency "timer"
00:03:28.892  Fetching value of define "__AVX512F__" : 1 (cached)
00:03:28.892  Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:28.892  Fetching value of define "__AVX512CD__" : 1 (cached)
00:03:28.892  Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:28.892  Message: lib/acl: Defining dependency "acl"
00:03:28.892  Message: lib/bbdev: Defining dependency "bbdev"
00:03:28.892  Message: lib/bitratestats: Defining dependency "bitratestats"
00:03:28.892  Run-time dependency libelf found: YES 0.191
00:03:28.893  Message: lib/bpf: Defining dependency "bpf"
00:03:28.893  Message: lib/cfgfile: Defining dependency "cfgfile"
00:03:28.893  Message: lib/compressdev: Defining dependency "compressdev"
00:03:28.893  Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:28.893  Message: lib/distributor: Defining dependency "distributor"
00:03:28.893  Message: lib/dmadev: Defining dependency "dmadev"
00:03:28.893  Message: lib/efd: Defining dependency "efd"
00:03:28.893  Message: lib/eventdev: Defining dependency "eventdev"
00:03:28.893  Message: lib/dispatcher: Defining dependency "dispatcher"
00:03:28.893  Message: lib/gpudev: Defining dependency "gpudev"
00:03:28.893  Message: lib/gro: Defining dependency "gro"
00:03:28.893  Message: lib/gso: Defining dependency "gso"
00:03:28.893  Message: lib/ip_frag: Defining dependency "ip_frag"
00:03:28.893  Message: lib/jobstats: Defining dependency "jobstats"
00:03:28.893  Message: lib/latencystats: Defining dependency "latencystats"
00:03:28.893  Message: lib/lpm: Defining dependency "lpm"
00:03:28.893  Fetching value of define "__AVX512F__" : 1 (cached)
00:03:28.893  Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:28.893  Fetching value of define "__AVX512IFMA__" : (undefined) 
00:03:28.893  Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 
00:03:28.893  Message: lib/member: Defining dependency "member"
00:03:28.893  Message: lib/pcapng: Defining dependency "pcapng"
00:03:28.893  Compiler for C supports arguments -Wno-cast-qual: YES 
00:03:28.893  Message: lib/power: Defining dependency "power"
00:03:28.893  Message: lib/rawdev: Defining dependency "rawdev"
00:03:28.893  Message: lib/regexdev: Defining dependency "regexdev"
00:03:28.893  Message: lib/mldev: Defining dependency "mldev"
00:03:28.893  Message: lib/rib: Defining dependency "rib"
00:03:28.893  Message: lib/reorder: Defining dependency "reorder"
00:03:28.893  Message: lib/sched: Defining dependency "sched"
00:03:28.893  Message: lib/security: Defining dependency "security"
00:03:28.893  Message: lib/stack: Defining dependency "stack"
00:03:28.893  Has header "linux/userfaultfd.h" : YES 
00:03:28.893  Has header "linux/vduse.h" : YES 
00:03:28.893  Message: lib/vhost: Defining dependency "vhost"
00:03:28.893  Message: lib/ipsec: Defining dependency "ipsec"
00:03:28.893  Message: lib/pdcp: Defining dependency "pdcp"
00:03:28.893  Fetching value of define "__AVX512F__" : 1 (cached)
00:03:28.893  Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:28.893  Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:28.893  Message: lib/fib: Defining dependency "fib"
00:03:28.893  Message: lib/port: Defining dependency "port"
00:03:28.893  Message: lib/pdump: Defining dependency "pdump"
00:03:28.893  Message: lib/table: Defining dependency "table"
00:03:28.893  Message: lib/pipeline: Defining dependency "pipeline"
00:03:28.893  Message: lib/graph: Defining dependency "graph"
00:03:28.893  Message: lib/node: Defining dependency "node"
00:03:28.893  Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:28.893  Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:28.893  Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:29.828  Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:29.828  Compiler for C supports arguments -Wno-sign-compare: YES 
00:03:29.828  Compiler for C supports arguments -Wno-unused-value: YES 
00:03:29.828  Compiler for C supports arguments -Wno-format: YES 
00:03:29.828  Compiler for C supports arguments -Wno-format-security: YES 
00:03:29.829  Compiler for C supports arguments -Wno-format-nonliteral: YES 
00:03:29.829  Compiler for C supports arguments -Wno-strict-aliasing: YES 
00:03:29.829  Compiler for C supports arguments -Wno-unused-but-set-variable: YES 
00:03:29.829  Compiler for C supports arguments -Wno-unused-parameter: YES 
00:03:29.829  Fetching value of define "__AVX512F__" : 1 (cached)
00:03:29.829  Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:29.829  Compiler for C supports arguments -mavx512f: YES (cached)
00:03:29.829  Compiler for C supports arguments -mavx512bw: YES (cached)
00:03:29.829  Compiler for C supports arguments -march=skylake-avx512: YES 
00:03:29.829  Message: drivers/net/i40e: Defining dependency "net_i40e"
00:03:29.829  Has header "sys/epoll.h" : YES 
00:03:29.829  Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:29.829  Configuring doxy-api-html.conf using configuration
00:03:29.829  Configuring doxy-api-man.conf using configuration
00:03:29.829  Program mandb found: YES (/usr/bin/mandb)
00:03:29.829  Program sphinx-build found: NO
00:03:29.829  Configuring rte_build_config.h using configuration
00:03:29.829  Message: 
00:03:29.829  =================
00:03:29.829  Applications Enabled
00:03:29.829  =================
00:03:29.829  
00:03:29.829  apps:
00:03:29.829  	dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 
00:03:29.829  	test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 
00:03:29.829  	test-pmd, test-regex, test-sad, test-security-perf, 
00:03:29.829  
00:03:29.829  Message: 
00:03:29.829  =================
00:03:29.829  Libraries Enabled
00:03:29.829  =================
00:03:29.829  
00:03:29.829  libs:
00:03:29.829  	log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:03:29.829  	net, meter, ethdev, pci, cmdline, metrics, hash, timer, 
00:03:29.829  	acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 
00:03:29.829  	dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 
00:03:29.829  	jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 
00:03:29.829  	mldev, rib, reorder, sched, security, stack, vhost, ipsec, 
00:03:29.829  	pdcp, fib, port, pdump, table, pipeline, graph, node, 
00:03:29.829  	
00:03:29.829  
00:03:29.829  Message: 
00:03:29.829  ===============
00:03:29.829  Drivers Enabled
00:03:29.829  ===============
00:03:29.829  
00:03:29.829  common:
00:03:29.829  	
00:03:29.829  bus:
00:03:29.829  	pci, vdev, 
00:03:29.829  mempool:
00:03:29.829  	ring, 
00:03:29.829  dma:
00:03:29.829  	
00:03:29.829  net:
00:03:29.829  	i40e, 
00:03:29.829  raw:
00:03:29.829  	
00:03:29.829  crypto:
00:03:29.829  	
00:03:29.829  compress:
00:03:29.829  	
00:03:29.829  regex:
00:03:29.829  	
00:03:29.829  ml:
00:03:29.829  	
00:03:29.829  vdpa:
00:03:29.829  	
00:03:29.829  event:
00:03:29.829  	
00:03:29.829  baseband:
00:03:29.829  	
00:03:29.829  gpu:
00:03:29.829  	
00:03:29.829  
00:03:29.829  Message: 
00:03:29.829  =================
00:03:29.829  Content Skipped
00:03:29.829  =================
00:03:29.829  
00:03:29.829  apps:
00:03:29.829  	
00:03:29.829  libs:
00:03:29.829  	
00:03:29.829  drivers:
00:03:29.829  	common/cpt:	not in enabled drivers build config
00:03:29.829  	common/dpaax:	not in enabled drivers build config
00:03:29.829  	common/iavf:	not in enabled drivers build config
00:03:29.829  	common/idpf:	not in enabled drivers build config
00:03:29.829  	common/mvep:	not in enabled drivers build config
00:03:29.829  	common/octeontx:	not in enabled drivers build config
00:03:29.829  	bus/auxiliary:	not in enabled drivers build config
00:03:29.829  	bus/cdx:	not in enabled drivers build config
00:03:29.829  	bus/dpaa:	not in enabled drivers build config
00:03:29.829  	bus/fslmc:	not in enabled drivers build config
00:03:29.829  	bus/ifpga:	not in enabled drivers build config
00:03:29.829  	bus/platform:	not in enabled drivers build config
00:03:29.829  	bus/vmbus:	not in enabled drivers build config
00:03:29.829  	common/cnxk:	not in enabled drivers build config
00:03:29.829  	common/mlx5:	not in enabled drivers build config
00:03:29.829  	common/nfp:	not in enabled drivers build config
00:03:29.829  	common/qat:	not in enabled drivers build config
00:03:29.829  	common/sfc_efx:	not in enabled drivers build config
00:03:29.829  	mempool/bucket:	not in enabled drivers build config
00:03:29.829  	mempool/cnxk:	not in enabled drivers build config
00:03:29.829  	mempool/dpaa:	not in enabled drivers build config
00:03:29.829  	mempool/dpaa2:	not in enabled drivers build config
00:03:29.829  	mempool/octeontx:	not in enabled drivers build config
00:03:29.829  	mempool/stack:	not in enabled drivers build config
00:03:29.829  	dma/cnxk:	not in enabled drivers build config
00:03:29.829  	dma/dpaa:	not in enabled drivers build config
00:03:29.829  	dma/dpaa2:	not in enabled drivers build config
00:03:29.829  	dma/hisilicon:	not in enabled drivers build config
00:03:29.829  	dma/idxd:	not in enabled drivers build config
00:03:29.829  	dma/ioat:	not in enabled drivers build config
00:03:29.829  	dma/skeleton:	not in enabled drivers build config
00:03:29.829  	net/af_packet:	not in enabled drivers build config
00:03:29.829  	net/af_xdp:	not in enabled drivers build config
00:03:29.829  	net/ark:	not in enabled drivers build config
00:03:29.829  	net/atlantic:	not in enabled drivers build config
00:03:29.829  	net/avp:	not in enabled drivers build config
00:03:29.829  	net/axgbe:	not in enabled drivers build config
00:03:29.829  	net/bnx2x:	not in enabled drivers build config
00:03:29.829  	net/bnxt:	not in enabled drivers build config
00:03:29.829  	net/bonding:	not in enabled drivers build config
00:03:29.829  	net/cnxk:	not in enabled drivers build config
00:03:29.829  	net/cpfl:	not in enabled drivers build config
00:03:29.829  	net/cxgbe:	not in enabled drivers build config
00:03:29.829  	net/dpaa:	not in enabled drivers build config
00:03:29.829  	net/dpaa2:	not in enabled drivers build config
00:03:29.829  	net/e1000:	not in enabled drivers build config
00:03:29.829  	net/ena:	not in enabled drivers build config
00:03:29.829  	net/enetc:	not in enabled drivers build config
00:03:29.829  	net/enetfec:	not in enabled drivers build config
00:03:29.829  	net/enic:	not in enabled drivers build config
00:03:29.829  	net/failsafe:	not in enabled drivers build config
00:03:29.829  	net/fm10k:	not in enabled drivers build config
00:03:29.829  	net/gve:	not in enabled drivers build config
00:03:29.829  	net/hinic:	not in enabled drivers build config
00:03:29.829  	net/hns3:	not in enabled drivers build config
00:03:29.829  	net/iavf:	not in enabled drivers build config
00:03:29.829  	net/ice:	not in enabled drivers build config
00:03:29.829  	net/idpf:	not in enabled drivers build config
00:03:29.829  	net/igc:	not in enabled drivers build config
00:03:29.829  	net/ionic:	not in enabled drivers build config
00:03:29.829  	net/ipn3ke:	not in enabled drivers build config
00:03:29.829  	net/ixgbe:	not in enabled drivers build config
00:03:29.829  	net/mana:	not in enabled drivers build config
00:03:29.829  	net/memif:	not in enabled drivers build config
00:03:29.829  	net/mlx4:	not in enabled drivers build config
00:03:29.829  	net/mlx5:	not in enabled drivers build config
00:03:29.829  	net/mvneta:	not in enabled drivers build config
00:03:29.829  	net/mvpp2:	not in enabled drivers build config
00:03:29.829  	net/netvsc:	not in enabled drivers build config
00:03:29.829  	net/nfb:	not in enabled drivers build config
00:03:29.829  	net/nfp:	not in enabled drivers build config
00:03:29.829  	net/ngbe:	not in enabled drivers build config
00:03:29.829  	net/null:	not in enabled drivers build config
00:03:29.829  	net/octeontx:	not in enabled drivers build config
00:03:29.829  	net/octeon_ep:	not in enabled drivers build config
00:03:29.829  	net/pcap:	not in enabled drivers build config
00:03:29.829  	net/pfe:	not in enabled drivers build config
00:03:29.829  	net/qede:	not in enabled drivers build config
00:03:29.829  	net/ring:	not in enabled drivers build config
00:03:29.829  	net/sfc:	not in enabled drivers build config
00:03:29.829  	net/softnic:	not in enabled drivers build config
00:03:29.829  	net/tap:	not in enabled drivers build config
00:03:29.829  	net/thunderx:	not in enabled drivers build config
00:03:29.829  	net/txgbe:	not in enabled drivers build config
00:03:29.829  	net/vdev_netvsc:	not in enabled drivers build config
00:03:29.829  	net/vhost:	not in enabled drivers build config
00:03:29.829  	net/virtio:	not in enabled drivers build config
00:03:29.829  	net/vmxnet3:	not in enabled drivers build config
00:03:29.829  	raw/cnxk_bphy:	not in enabled drivers build config
00:03:29.829  	raw/cnxk_gpio:	not in enabled drivers build config
00:03:29.829  	raw/dpaa2_cmdif:	not in enabled drivers build config
00:03:29.829  	raw/ifpga:	not in enabled drivers build config
00:03:29.829  	raw/ntb:	not in enabled drivers build config
00:03:29.829  	raw/skeleton:	not in enabled drivers build config
00:03:29.829  	crypto/armv8:	not in enabled drivers build config
00:03:29.829  	crypto/bcmfs:	not in enabled drivers build config
00:03:29.829  	crypto/caam_jr:	not in enabled drivers build config
00:03:29.829  	crypto/ccp:	not in enabled drivers build config
00:03:29.829  	crypto/cnxk:	not in enabled drivers build config
00:03:29.829  	crypto/dpaa_sec:	not in enabled drivers build config
00:03:29.829  	crypto/dpaa2_sec:	not in enabled drivers build config
00:03:29.829  	crypto/ipsec_mb:	not in enabled drivers build config
00:03:29.829  	crypto/mlx5:	not in enabled drivers build config
00:03:29.829  	crypto/mvsam:	not in enabled drivers build config
00:03:29.829  	crypto/nitrox:	not in enabled drivers build config
00:03:29.829  	crypto/null:	not in enabled drivers build config
00:03:29.829  	crypto/octeontx:	not in enabled drivers build config
00:03:29.829  	crypto/openssl:	not in enabled drivers build config
00:03:29.829  	crypto/scheduler:	not in enabled drivers build config
00:03:29.829  	crypto/uadk:	not in enabled drivers build config
00:03:29.829  	crypto/virtio:	not in enabled drivers build config
00:03:29.829  	compress/isal:	not in enabled drivers build config
00:03:29.829  	compress/mlx5:	not in enabled drivers build config
00:03:29.829  	compress/octeontx:	not in enabled drivers build config
00:03:29.829  	compress/zlib:	not in enabled drivers build config
00:03:29.829  	regex/mlx5:	not in enabled drivers build config
00:03:29.829  	regex/cn9k:	not in enabled drivers build config
00:03:29.829  	ml/cnxk:	not in enabled drivers build config
00:03:29.829  	vdpa/ifc:	not in enabled drivers build config
00:03:29.829  	vdpa/mlx5:	not in enabled drivers build config
00:03:29.829  	vdpa/nfp:	not in enabled drivers build config
00:03:29.829  	vdpa/sfc:	not in enabled drivers build config
00:03:29.829  	event/cnxk:	not in enabled drivers build config
00:03:29.829  	event/dlb2:	not in enabled drivers build config
00:03:29.830  	event/dpaa:	not in enabled drivers build config
00:03:29.830  	event/dpaa2:	not in enabled drivers build config
00:03:29.830  	event/dsw:	not in enabled drivers build config
00:03:29.830  	event/opdl:	not in enabled drivers build config
00:03:29.830  	event/skeleton:	not in enabled drivers build config
00:03:29.830  	event/sw:	not in enabled drivers build config
00:03:29.830  	event/octeontx:	not in enabled drivers build config
00:03:29.830  	baseband/acc:	not in enabled drivers build config
00:03:29.830  	baseband/fpga_5gnr_fec:	not in enabled drivers build config
00:03:29.830  	baseband/fpga_lte_fec:	not in enabled drivers build config
00:03:29.830  	baseband/la12xx:	not in enabled drivers build config
00:03:29.830  	baseband/null:	not in enabled drivers build config
00:03:29.830  	baseband/turbo_sw:	not in enabled drivers build config
00:03:29.830  	gpu/cuda:	not in enabled drivers build config
00:03:29.830  	
00:03:29.830  
00:03:29.830  Build targets in project: 217
00:03:29.830  
00:03:29.830  DPDK 23.11.0
00:03:29.830  
00:03:29.830    User defined options
00:03:29.830      libdir        : lib
00:03:29.830      prefix        : /home/vagrant/spdk_repo/dpdk/build
00:03:29.830      c_args        : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:03:29.830      c_link_args   : 
00:03:29.830      enable_docs   : false
00:03:29.830      enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:03:29.830      enable_kmods  : false
00:03:29.830      machine       : native
00:03:29.830      tests         : false
00:03:29.830  
00:03:29.830  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:29.830  WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
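The "User defined options" summarized above correspond roughly to a configure invocation like the sketch below. This is a reconstruction from the summary, not the literal command the autobuild script ran (the warning above shows the legacy `meson [options]` form was used rather than `meson setup`):

    # Approximate configure step, reconstructed from the option summary above
    meson setup /home/vagrant/spdk_repo/dpdk/build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Dmachine=native \
        -Dtests=false \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base

The build directory matches the one ninja enters on the next line; the prefix matches the install path shown in the summary.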
00:03:29.830   11:24:55 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:03:29.830  ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:03:30.089  [1/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:30.089  [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:30.089  [3/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:30.089  [4/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:30.089  [5/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:30.089  [6/707] Linking static target lib/librte_kvargs.a
00:03:30.089  [7/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:30.089  [8/707] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:30.089  [9/707] Linking static target lib/librte_log.a
00:03:30.089  [10/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:30.347  [11/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:30.347  [12/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:30.347  [13/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:30.347  [14/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:30.347  [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:30.607  [16/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:30.608  [17/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:03:30.608  [18/707] Linking target lib/librte_log.so.24.0
00:03:30.608  [19/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:30.608  [20/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:30.608  [21/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:03:30.608  [22/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:30.608  [23/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:30.867  [24/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:30.867  [25/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:30.867  [26/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:03:30.867  [27/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:30.867  [28/707] Linking target lib/librte_kvargs.so.24.0
00:03:30.867  [29/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:30.867  [30/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:03:30.867  [31/707] Linking static target lib/librte_telemetry.a
00:03:31.126  [32/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:03:31.126  [33/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:31.126  [34/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:31.126  [35/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:31.126  [36/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:03:31.126  [37/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:03:31.126  [38/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:31.386  [39/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:31.386  [40/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:31.386  [41/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:31.386  [42/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:03:31.386  [43/707] Linking target lib/librte_telemetry.so.24.0
00:03:31.386  [44/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:03:31.386  [45/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:03:31.386  [46/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:03:31.645  [47/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:03:31.645  [48/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:03:31.645  [49/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:03:31.645  [50/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:03:31.645  [51/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:31.904  [52/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:03:31.904  [53/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:03:31.904  [54/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:03:31.904  [55/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:03:31.904  [56/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:03:31.904  [57/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:03:31.904  [58/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:03:31.904  [59/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:03:31.904  [60/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:03:32.164  [61/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:03:32.164  [62/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:03:32.164  [63/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:03:32.164  [64/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:03:32.164  [65/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:03:32.164  [66/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:03:32.164  [67/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:03:32.164  [68/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:03:32.423  [69/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:03:32.423  [70/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:03:32.423  [71/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:03:32.423  [72/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:03:32.423  [73/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:03:32.423  [74/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:03:32.423  [75/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:03:32.682  [76/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:03:32.682  [77/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:03:32.682  [78/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:03:32.682  [79/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:03:32.682  [80/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:03:32.942  [81/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:03:32.942  [82/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:03:32.942  [83/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:03:32.942  [84/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:03:32.942  [85/707] Linking static target lib/librte_ring.a
00:03:32.942  [86/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:03:33.202  [87/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:03:33.202  [88/707] Linking static target lib/librte_eal.a
00:03:33.202  [89/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:03:33.202  [90/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:03:33.202  [91/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:03:33.202  [92/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:03:33.462  [93/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:03:33.462  [94/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:03:33.462  [95/707] Linking static target lib/librte_mempool.a
00:03:33.462  [96/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:03:33.462  [97/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:03:33.462  [98/707] Linking static target lib/librte_rcu.a
00:03:33.720  [99/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:03:33.720  [100/707] Linking static target lib/net/libnet_crc_avx512_lib.a
00:03:33.720  [101/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:03:33.720  [102/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:03:33.720  [103/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:03:33.720  [104/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:03:33.979  [105/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:03:33.979  [106/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:03:33.979  [107/707] Linking static target lib/librte_net.a
00:03:33.979  [108/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:03:33.979  [109/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:03:33.979  [110/707] Linking static target lib/librte_meter.a
00:03:33.979  [111/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:03:33.979  [112/707] Linking static target lib/librte_mbuf.a
00:03:34.237  [113/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:03:34.237  [114/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:03:34.237  [115/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:03:34.237  [116/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:03:34.237  [117/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:03:34.237  [118/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:03:34.496  [119/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:03:34.756  [120/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:03:34.756  [121/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:03:35.014  [122/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:03:35.014  [123/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:03:35.014  [124/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:03:35.014  [125/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:03:35.274  [126/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:03:35.274  [127/707] Linking static target lib/librte_pci.a
00:03:35.274  [128/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:03:35.274  [129/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:03:35.274  [130/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:03:35.274  [131/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:03:35.274  [132/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:03:35.274  [133/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:35.274  [134/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:03:35.274  [135/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:03:35.535  [136/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:03:35.535  [137/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:03:35.535  [138/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:03:35.535  [139/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:03:35.535  [140/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:03:35.535  [141/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:03:35.535  [142/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:03:35.794  [143/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:03:35.794  [144/707] Linking static target lib/librte_cmdline.a
00:03:35.794  [145/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:03:35.794  [146/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:03:35.794  [147/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:03:35.794  [148/707] Linking static target lib/librte_metrics.a
00:03:36.052  [149/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:03:36.052  [150/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:03:36.311  [151/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:03:36.311  [152/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:03:36.311  [153/707] Linking static target lib/librte_timer.a
00:03:36.569  [154/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:03:36.569  [155/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:03:36.569  [156/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:03:36.828  [157/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:03:36.828  [158/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:03:36.828  [159/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:03:37.087  [160/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:03:37.346  [161/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:03:37.346  [162/707] Linking static target lib/librte_bitratestats.a
00:03:37.346  [163/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:03:37.606  [164/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:03:37.606  [165/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:03:37.606  [166/707] Linking static target lib/librte_bbdev.a
00:03:37.606  [167/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:03:37.864  [168/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:03:38.123  [169/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:03:38.123  [170/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:03:38.123  [171/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:03:38.123  [172/707] Linking static target lib/librte_hash.a
00:03:38.123  [173/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:38.382  [174/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:03:38.382  [175/707] Linking static target lib/librte_ethdev.a
00:03:38.382  [176/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:03:38.382  [177/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o
00:03:38.382  [178/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:03:38.382  [179/707] Linking static target lib/acl/libavx2_tmp.a
00:03:38.642  [180/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:03:38.642  [181/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:03:38.642  [182/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:03:38.901  [183/707] Linking static target lib/librte_cfgfile.a
00:03:38.901  [184/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:03:38.901  [185/707] Linking target lib/librte_eal.so.24.0
00:03:38.901  [186/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:03:39.160  [187/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols
00:03:39.160  [188/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:03:39.160  [189/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:03:39.160  [190/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:03:39.160  [191/707] Linking target lib/librte_ring.so.24.0
00:03:39.160  [192/707] Linking target lib/librte_meter.so.24.0
00:03:39.160  [193/707] Linking target lib/librte_pci.so.24.0
00:03:39.160  [194/707] Linking target lib/librte_timer.so.24.0
00:03:39.160  [195/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:03:39.160  [196/707] Linking target lib/librte_cfgfile.so.24.0
00:03:39.160  [197/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols
00:03:39.160  [198/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols
00:03:39.160  [199/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols
00:03:39.160  [200/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols
00:03:39.160  [201/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:03:39.160  [202/707] Linking target lib/librte_rcu.so.24.0
00:03:39.160  [203/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:03:39.160  [204/707] Linking target lib/librte_mempool.so.24.0
00:03:39.160  [205/707] Linking static target lib/librte_bpf.a
00:03:39.419  [206/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols
00:03:39.419  [207/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols
00:03:39.419  [208/707] Linking target lib/librte_mbuf.so.24.0
00:03:39.419  [209/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:03:39.419  [210/707] Linking static target lib/librte_compressdev.a
00:03:39.419  [211/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:03:39.419  [212/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols
00:03:39.678  [213/707] Linking target lib/librte_net.so.24.0
00:03:39.678  [214/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o
00:03:39.678  [215/707] Linking static target lib/librte_acl.a
00:03:39.678  [216/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:03:39.678  [217/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols
00:03:39.678  [218/707] Linking target lib/librte_bbdev.so.24.0
00:03:39.678  [219/707] Linking target lib/librte_cmdline.so.24.0
00:03:39.678  [220/707] Linking target lib/librte_hash.so.24.0
00:03:39.678  [221/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:03:39.937  [222/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:03:39.937  [223/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols
00:03:39.937  [224/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:03:39.937  [225/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:39.937  [226/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:03:39.937  [227/707] Linking target lib/librte_compressdev.so.24.0
00:03:39.937  [228/707] Linking target lib/librte_acl.so.24.0
00:03:39.937  [229/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:03:39.937  [230/707] Linking static target lib/librte_distributor.a
00:03:40.197  [231/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols
00:03:40.197  [232/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:03:40.197  [233/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:03:40.197  [234/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:03:40.456  [235/707] Linking target lib/librte_distributor.so.24.0
00:03:40.456  [236/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:03:40.456  [237/707] Linking static target lib/librte_dmadev.a
00:03:40.715  [238/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:03:40.715  [239/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:03:40.974  [240/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:40.974  [241/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o
00:03:40.974  [242/707] Linking target lib/librte_dmadev.so.24.0
00:03:40.974  [243/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols
00:03:41.234  [244/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:03:41.234  [245/707] Linking static target lib/librte_efd.a
00:03:41.234  [246/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:03:41.494  [247/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:03:41.494  [248/707] Linking static target lib/librte_cryptodev.a
00:03:41.494  [249/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:03:41.494  [250/707] Linking target lib/librte_efd.so.24.0
00:03:41.494  [251/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o
00:03:41.494  [252/707] Linking static target lib/librte_dispatcher.a
00:03:41.494  [253/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:03:41.753  [254/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:03:41.753  [255/707] Linking static target lib/librte_gpudev.a
00:03:41.753  [256/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output)
00:03:41.753  [257/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:03:42.012  [258/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:03:42.012  [259/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:03:42.012  [260/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o
00:03:42.271  [261/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:03:42.530  [262/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:03:42.530  [263/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:03:42.530  [264/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:42.530  [265/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:03:42.530  [266/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:42.530  [267/707] Linking target lib/librte_cryptodev.so.24.0
00:03:42.530  [268/707] Linking target lib/librte_gpudev.so.24.0
00:03:42.530  [269/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:03:42.530  [270/707] Linking static target lib/librte_gro.a
00:03:42.530  [271/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:03:42.788  [272/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols
00:03:42.788  [273/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:03:42.788  [274/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:03:42.788  [275/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:42.788  [276/707] Linking static target lib/librte_eventdev.a
00:03:42.788  [277/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:03:42.788  [278/707] Linking target lib/librte_ethdev.so.24.0
00:03:42.788  [279/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:03:42.788  [280/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:03:43.046  [281/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:03:43.046  [282/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols
00:03:43.046  [283/707] Linking static target lib/librte_gso.a
00:03:43.046  [284/707] Linking target lib/librte_metrics.so.24.0
00:03:43.046  [285/707] Linking target lib/librte_bpf.so.24.0
00:03:43.046  [286/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:03:43.046  [287/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols
00:03:43.046  [288/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:03:43.046  [289/707] Linking target lib/librte_gro.so.24.0
00:03:43.046  [290/707] Linking target lib/librte_bitratestats.so.24.0
00:03:43.046  [291/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols
00:03:43.305  [292/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:03:43.305  [293/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:03:43.305  [294/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:03:43.305  [295/707] Linking static target lib/librte_jobstats.a
00:03:43.305  [296/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:03:43.305  [297/707] Linking target lib/librte_gso.so.24.0
00:03:43.305  [298/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:03:43.563  [299/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:03:43.563  [300/707] Linking static target lib/librte_ip_frag.a
00:03:43.563  [301/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:03:43.563  [302/707] Linking target lib/librte_jobstats.so.24.0
00:03:43.563  [303/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:03:43.563  [304/707] Linking static target lib/librte_latencystats.a
00:03:43.822  [305/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:03:43.822  [306/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:03:43.822  [307/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:03:43.822  [308/707] Linking target lib/librte_ip_frag.so.24.0
00:03:43.822  [309/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:03:43.822  [310/707] Linking static target lib/member/libsketch_avx512_tmp.a
00:03:43.822  [311/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:03:43.822  [312/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:03:43.822  [313/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols
00:03:43.822  [314/707] Linking target lib/librte_latencystats.so.24.0
00:03:44.082  [315/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:03:44.082  [316/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:03:44.082  [317/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:03:44.082  [318/707] Linking static target lib/librte_lpm.a
00:03:44.340  [319/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:03:44.340  [320/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:03:44.340  [321/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:03:44.340  [322/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:03:44.340  [323/707] Linking target lib/librte_lpm.so.24.0
00:03:44.340  [324/707] Linking static target lib/librte_pcapng.a
00:03:44.340  [325/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:03:44.340  [326/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:03:44.600  [327/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:03:44.600  [328/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:03:44.600  [329/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols
00:03:44.600  [330/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:03:44.600  [331/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:03:44.600  [332/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:44.600  [333/707] Linking target lib/librte_pcapng.so.24.0
00:03:44.600  [334/707] Linking target lib/librte_eventdev.so.24.0
00:03:44.859  [335/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:03:44.859  [336/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols
00:03:44.859  [337/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols
00:03:44.859  [338/707] Linking target lib/librte_dispatcher.so.24.0
00:03:44.859  [339/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:03:44.859  [340/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o
00:03:45.117  [341/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:03:45.117  [342/707] Linking static target lib/librte_power.a
00:03:45.117  [343/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:03:45.117  [344/707] Linking static target lib/librte_regexdev.a
00:03:45.117  [345/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:03:45.117  [346/707] Linking static target lib/librte_rawdev.a
00:03:45.117  [347/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o
00:03:45.117  [348/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o
00:03:45.376  [349/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o
00:03:45.376  [350/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o
00:03:45.376  [351/707] Linking static target lib/librte_mldev.a
00:03:45.376  [352/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:03:45.376  [353/707] Linking static target lib/librte_member.a
00:03:45.376  [354/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:03:45.635  [355/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:03:45.635  [356/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:03:45.635  [357/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:45.635  [358/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:03:45.635  [359/707] Linking target lib/librte_power.so.24.0
00:03:45.635  [360/707] Linking target lib/librte_rawdev.so.24.0
00:03:45.635  [361/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:03:45.635  [362/707] Linking static target lib/librte_rib.a
00:03:45.635  [363/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:03:45.635  [364/707] Linking target lib/librte_member.so.24.0
00:03:45.894  [365/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:45.894  [366/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:03:45.894  [367/707] Linking static target lib/librte_reorder.a
00:03:45.894  [368/707] Linking target lib/librte_regexdev.so.24.0
00:03:45.894  [369/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:03:45.894  [370/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:03:45.894  [371/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:03:45.894  [372/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:03:46.153  [373/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:03:46.153  [374/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:03:46.153  [375/707] Linking static target lib/librte_stack.a
00:03:46.153  [376/707] Linking target lib/librte_reorder.so.24.0
00:03:46.153  [377/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:03:46.153  [378/707] Linking target lib/librte_rib.so.24.0
00:03:46.153  [379/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:03:46.153  [380/707] Linking static target lib/librte_security.a
00:03:46.153  [381/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols
00:03:46.412  [382/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:03:46.412  [383/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols
00:03:46.412  [384/707] Linking target lib/librte_stack.so.24.0
00:03:46.412  [385/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:03:46.412  [386/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:03:46.412  [387/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:46.671  [388/707] Linking target lib/librte_mldev.so.24.0
00:03:46.671  [389/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:03:46.671  [390/707] Linking target lib/librte_security.so.24.0
00:03:46.671  [391/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:03:46.671  [392/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols
00:03:46.671  [393/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:03:46.671  [394/707] Linking static target lib/librte_sched.a
00:03:46.931  [395/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:03:46.931  [396/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:03:47.190  [397/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:03:47.190  [398/707] Linking target lib/librte_sched.so.24.0
00:03:47.190  [399/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:03:47.190  [400/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols
00:03:47.449  [401/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:03:47.449  [402/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:03:47.708  [403/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:03:47.708  [404/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o
00:03:47.708  [405/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o
00:03:47.708  [406/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:03:47.968  [407/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o
00:03:47.968  [408/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:03:47.968  [409/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o
00:03:47.968  [410/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:03:48.228  [411/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:03:48.228  [412/707] Linking static target lib/librte_ipsec.a
00:03:48.228  [413/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:03:48.228  [414/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o
00:03:48.228  [415/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:03:48.487  [416/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:03:48.487  [417/707] Linking target lib/librte_ipsec.so.24.0
00:03:48.487  [418/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols
00:03:48.487  [419/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o
00:03:48.746  [420/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o
00:03:48.746  [421/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:03:49.005  [422/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:03:49.005  [423/707] Linking static target lib/librte_fib.a
00:03:49.005  [424/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:03:49.005  [425/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:03:49.005  [426/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:03:49.264  [427/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:03:49.264  [428/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:03:49.264  [429/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:03:49.264  [430/707] Linking target lib/librte_fib.so.24.0
00:03:49.264  [431/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o
00:03:49.264  [432/707] Linking static target lib/librte_pdcp.a
00:03:49.523  [433/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output)
00:03:49.523  [434/707] Linking target lib/librte_pdcp.so.24.0
00:03:49.782  [435/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:03:49.782  [436/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:03:49.782  [437/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:03:49.782  [438/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:03:49.782  [439/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:03:50.041  [440/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:03:50.041  [441/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:03:50.300  [442/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:03:50.300  [443/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:03:50.300  [444/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:03:50.559  [445/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:03:50.560  [446/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:03:50.560  [447/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:03:50.560  [448/707] Linking static target lib/librte_port.a
00:03:50.560  [449/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:03:50.560  [450/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:03:50.825  [451/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:03:50.825  [452/707] Linking static target lib/librte_pdump.a
00:03:50.825  [453/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:03:51.093  [454/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:03:51.093  [455/707] Linking target lib/librte_pdump.so.24.0
00:03:51.093  [456/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:03:51.093  [457/707] Linking target lib/librte_port.so.24.0
00:03:51.352  [458/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols
00:03:51.352  [459/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:03:51.352  [460/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:03:51.352  [461/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:03:51.352  [462/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:03:51.610  [463/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:03:51.610  [464/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:03:51.610  [465/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:03:51.610  [466/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:03:51.610  [467/707] Linking static target lib/librte_table.a
00:03:51.869  [468/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:03:51.869  [469/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:03:52.127  [470/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:03:52.127  [471/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:03:52.386  [472/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:03:52.386  [473/707] Linking target lib/librte_table.so.24.0
00:03:52.386  [474/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols
00:03:52.386  [475/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:03:52.386  [476/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:03:52.645  [477/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o
00:03:52.645  [478/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:03:52.903  [479/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:03:52.904  [480/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:03:52.904  [481/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o
00:03:53.162  [482/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:03:53.162  [483/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o
00:03:53.162  [484/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:03:53.420  [485/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:03:53.420  [486/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:03:53.678  [487/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o
00:03:53.678  [488/707] Linking static target lib/librte_graph.a
00:03:53.678  [489/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:03:53.678  [490/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o
00:03:53.937  [491/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o
00:03:54.195  [492/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:03:54.195  [493/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:03:54.195  [494/707] Linking target lib/librte_graph.so.24.0
00:03:54.195  [495/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols
00:03:54.195  [496/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o
00:03:54.195  [497/707] Compiling C object lib/librte_node.a.p/node_null.c.o
00:03:54.453  [498/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o
00:03:54.454  [499/707] Compiling C object lib/librte_node.a.p/node_log.c.o
00:03:54.454  [500/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:03:54.454  [501/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o
00:03:54.712  [502/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:03:54.712  [503/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o
00:03:54.712  [504/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:03:54.971  [505/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:03:54.971  [506/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:03:54.971  [507/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:03:54.971  [508/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o
00:03:54.971  [509/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:03:55.228  [510/707] Linking static target lib/librte_node.a
00:03:55.229  [511/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:03:55.229  [512/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:03:55.486  [513/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:03:55.486  [514/707] Linking target lib/librte_node.so.24.0
00:03:55.486  [515/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:03:55.486  [516/707] Linking static target drivers/libtmp_rte_bus_vdev.a
00:03:55.486  [517/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:03:55.486  [518/707] Linking static target drivers/libtmp_rte_bus_pci.a
00:03:55.744  [519/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:03:55.744  [520/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:55.744  [521/707] Linking static target drivers/librte_bus_vdev.a
00:03:55.744  [522/707] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:03:55.744  [523/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:55.744  [524/707] Linking static target drivers/librte_bus_pci.a
00:03:55.744  [525/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:03:56.002  [526/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:56.002  [527/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:03:56.002  [528/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:56.002  [529/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:03:56.002  [530/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:56.002  [531/707] Linking target drivers/librte_bus_vdev.so.24.0
00:03:56.002  [532/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:03:56.002  [533/707] Linking static target drivers/libtmp_rte_mempool_ring.a
00:03:56.002  [534/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols
00:03:56.261  [535/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:56.261  [536/707] Linking target drivers/librte_bus_pci.so.24.0
00:03:56.261  [537/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:03:56.261  [538/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:56.261  [539/707] Linking static target drivers/librte_mempool_ring.a
00:03:56.261  [540/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:56.261  [541/707] Linking target drivers/librte_mempool_ring.so.24.0
00:03:56.261  [542/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols
00:03:56.520  [543/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:03:56.779  [544/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:03:57.037  [545/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:03:57.603  [546/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:03:57.603  [547/707] Linking static target drivers/net/i40e/base/libi40e_base.a
00:03:57.603  [548/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:03:58.169  [549/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:03:58.169  [550/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o
00:03:58.169  [551/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a
00:03:58.169  [552/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:03:58.169  [553/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:03:58.427  [554/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:03:58.427  [555/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:03:58.993  [556/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:03:58.993  [557/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:03:58.993  [558/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o
00:03:58.993  [559/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:03:59.250  [560/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o
00:03:59.507  [561/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o
00:03:59.507  [562/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o
00:03:59.765  [563/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:04:00.022  [564/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o
00:04:00.022  [565/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o
00:04:00.022  [566/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:04:00.022  [567/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o
00:04:00.022  [568/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o
00:04:00.280  [569/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:04:00.280  [570/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:04:00.538  [571/707] Compiling C object app/dpdk-graph.p/graph_main.c.o
00:04:00.538  [572/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o
00:04:00.538  [573/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o
00:04:00.538  [574/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o
00:04:00.796  [575/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o
00:04:00.796  [576/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:04:01.055  [577/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:04:01.055  [578/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:04:01.055  [579/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:04:01.055  [580/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:04:01.055  [581/707] Linking static target drivers/libtmp_rte_net_i40e.a
00:04:01.312  [582/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:04:01.570  [583/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:04:01.570  [584/707] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:04:01.570  [585/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:04:01.570  [586/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:04:01.570  [587/707] Linking static target drivers/librte_net_i40e.a
00:04:01.570  [588/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:04:01.570  [589/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:04:02.135  [590/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:04:02.135  [591/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:04:02.135  [592/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:04:02.394  [593/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:04:02.394  [594/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:04:02.394  [595/707] Linking target drivers/librte_net_i40e.so.24.0
00:04:02.394  [596/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:04:02.651  [597/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:04:02.910  [598/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:04:02.910  [599/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:04:03.168  [600/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:04:03.168  [601/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:04:03.168  [602/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:04:03.168  [603/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:04:03.427  [604/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:04:03.427  [605/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:04:03.427  [606/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:04:03.685  [607/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:04:03.686  [608/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:04:03.686  [609/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:04:03.686  [610/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o
00:04:03.686  [611/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:04:03.944  [612/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:04:03.944  [613/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o
00:04:04.508  [614/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:04:04.508  [615/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:04:04.508  [616/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:04:05.075  [617/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:04:05.333  [618/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:04:05.333  [619/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:04:05.333  [620/707] Linking static target lib/librte_vhost.a
00:04:05.590  [621/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:04:05.590  [622/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:04:05.590  [623/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:04:05.590  [624/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:04:05.590  [625/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:04:05.848  [626/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:04:05.848  [627/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o
00:04:06.106  [628/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:04:06.106  [629/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o
00:04:06.106  [630/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:04:06.106  [631/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o
00:04:06.106  [632/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o
00:04:06.363  [633/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o
00:04:06.363  [634/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o
00:04:06.622  [635/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o
00:04:06.622  [636/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:04:06.622  [637/707] Linking target lib/librte_vhost.so.24.0
00:04:06.622  [638/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o
00:04:06.622  [639/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:04:06.880  [640/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o
00:04:06.880  [641/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o
00:04:06.880  [642/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o
00:04:07.139  [643/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:04:07.139  [644/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:04:07.139  [645/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:04:07.139  [646/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:04:07.397  [647/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:04:07.397  [648/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:04:07.656  [649/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:04:07.656  [650/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:04:07.915  [651/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:04:07.915  [652/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:04:07.915  [653/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:04:07.915  [654/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o
00:04:08.172  [655/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o
00:04:08.172  [656/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:04:08.172  [657/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:04:08.172  [658/707] Linking static target lib/librte_pipeline.a
00:04:08.430  [659/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:04:08.430  [660/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:04:08.688  [661/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:04:08.947  [662/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:04:08.947  [663/707] Linking target app/dpdk-dumpcap
00:04:08.947  [664/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:04:08.947  [665/707] Linking target app/dpdk-graph
00:04:09.206  [666/707] Linking target app/dpdk-pdump
00:04:09.206  [667/707] Linking target app/dpdk-proc-info
00:04:09.467  [668/707] Linking target app/dpdk-test-acl
00:04:09.467  [669/707] Linking target app/dpdk-test-bbdev
00:04:09.467  [670/707] Linking target app/dpdk-test-cmdline
00:04:09.467  [671/707] Linking target app/dpdk-test-compress-perf
00:04:09.726  [672/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:04:09.726  [673/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:04:09.726  [674/707] Linking target app/dpdk-test-crypto-perf
00:04:09.726  [675/707] Linking target app/dpdk-test-dma-perf
00:04:10.004  [676/707] Linking target app/dpdk-test-fib
00:04:10.004  [677/707] Linking target app/dpdk-test-eventdev
00:04:10.004  [678/707] Linking target app/dpdk-test-gpudev
00:04:10.004  [679/707] Linking target app/dpdk-test-flow-perf
00:04:10.261  [680/707] Linking target app/dpdk-test-pipeline
00:04:10.261  [681/707] Linking target app/dpdk-test-mldev
00:04:10.261  [682/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:04:10.261  [683/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:04:10.520  [684/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:04:10.520  [685/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:04:10.779  [686/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o
00:04:10.779  [687/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:04:10.779  [688/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:04:10.779  [689/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:04:10.779  [690/707] Linking target lib/librte_pipeline.so.24.0
00:04:11.038  [691/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:04:11.038  [692/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:04:11.296  [693/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:04:11.296  [694/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:04:11.556  [695/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:04:11.556  [696/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:04:11.556  [697/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:04:11.556  [698/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:04:11.556  [699/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:04:11.815  [700/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:04:12.074  [701/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:04:12.074  [702/707] Linking target app/dpdk-test-sad
00:04:12.074  [703/707] Linking target app/dpdk-test-regex
00:04:12.333  [704/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:04:12.333  [705/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:04:12.591  [706/707] Linking target app/dpdk-test-security-perf
00:04:12.591  [707/707] Linking target app/dpdk-testpmd
00:04:12.591    11:25:38 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s
00:04:12.591   11:25:38 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:04:12.591   11:25:38 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
00:04:12.849  ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:04:12.849  [0/1] Installing files.
00:04:13.111  Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:04:13.111  Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:13.112  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:04:13.113  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:04:13.114  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:04:13.115  Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:04:13.115  Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.115  Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.115  Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.115  Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.115  Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.115  Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.115  Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.115  Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.115  Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.115  Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.115  Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.115  Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.115  Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.115  Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.115  Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.115  Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.115  Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.116  Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.376  Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.376  Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.376  Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.376  Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:04:13.376  Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.376  Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:04:13.376  Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.376  Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:04:13.376  Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:04:13.376  Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:04:13.376  Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:13.376  Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:13.376  Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:13.376  Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:13.376  Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:13.376  Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:13.376  Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:13.376  Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:13.376  Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:13.376  Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:13.376  Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:13.376  Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:13.376  Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:13.376  Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:13.376  Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:13.376  Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:13.376  Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:13.376  Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:13.376  Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:13.376  Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:13.376  Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.376  Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.376  Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.376  Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.376  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:13.376  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:13.376  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:13.376  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:13.376  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:13.376  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:13.376  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:13.376  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:13.376  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:13.376  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:13.376  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:13.376  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:04:13.376  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.376  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.376  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.376  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.376  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.376  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.376  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.376  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.376  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.376  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.376  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.376  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.376  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.376  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.377  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.378  Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.379  Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.379  Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.379  Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.379  Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.379  Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.379  Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.379  Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.379  Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.379  Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.379  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.379  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.379  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.379  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.379  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.379  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.379  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.379  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.379  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.379  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:13.676  Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:13.677  Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:04:13.677  Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:04:13.677  Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
00:04:13.677  Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
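The libdpdk-libs.pc and libdpdk.pc files installed just above are how a downstream build (the SPDK configure step later in this log) discovers the headers and libraries under this private DPDK prefix. A minimal sketch of querying that prefix with pkg-config; the PKG_CONFIG_PATH export is an assumption about how a consumer would point pkg-config at a non-system install, not a command taken from this log.

    # Sketch: inspect the freshly installed DPDK through its pkg-config metadata.
    # PKG_CONFIG_PATH must contain the directory the .pc files were installed to.
    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk   # DPDK version string
    pkg-config --cflags libdpdk       # include flags, e.g. -I.../dpdk/build/include
    pkg-config --libs libdpdk         # linker flags for the shared librte_* libraries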
00:04:13.677  Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24
00:04:13.677  Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so
00:04:13.677  Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24
00:04:13.677  Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so
00:04:13.677  Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24
00:04:13.677  Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so
00:04:13.677  Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24
00:04:13.677  Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so
00:04:13.677  Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24
00:04:13.677  Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so
00:04:13.677  Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24
00:04:13.677  Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so
00:04:13.677  Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24
00:04:13.677  Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so
00:04:13.677  Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24
00:04:13.677  Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so
00:04:13.677  Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24
00:04:13.677  Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so
00:04:13.677  Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24
00:04:13.677  Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so
00:04:13.677  Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24
00:04:13.677  Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so
00:04:13.677  Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24
00:04:13.677  Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so
00:04:13.677  Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24
00:04:13.677  Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so
00:04:13.677  Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24
00:04:13.677  Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so
00:04:13.677  Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24
00:04:13.677  Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so
00:04:13.677  Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24
00:04:13.677  Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so
00:04:13.677  Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24
00:04:13.677  Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so
00:04:13.677  Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24
00:04:13.677  Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so
00:04:13.677  Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24
00:04:13.677  Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so
00:04:13.677  Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24
00:04:13.677  Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so
00:04:13.677  Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24
00:04:13.677  Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so
00:04:13.677  Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24
00:04:13.677  Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so
00:04:13.677  Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24
00:04:13.677  Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so
00:04:13.677  Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24
00:04:13.677  Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so
00:04:13.677  Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24
00:04:13.677  Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so
00:04:13.677  Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24
00:04:13.677  Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so
00:04:13.677  Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24
00:04:13.677  Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so
00:04:13.677  Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24
00:04:13.677  Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so
00:04:13.677  Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24
00:04:13.677  Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so
00:04:13.677  Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24
00:04:13.677  Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so
00:04:13.677  Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24
00:04:13.677  Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so
00:04:13.677  Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24
00:04:13.677  Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so
00:04:13.677  Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24
00:04:13.677  Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so
00:04:13.677  Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24
00:04:13.677  Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so
00:04:13.677  Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24
00:04:13.677  Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so
00:04:13.677  Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24
00:04:13.677  Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so
00:04:13.677  Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24
00:04:13.677  Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so
00:04:13.677  Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24
00:04:13.677  Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so
00:04:13.677  Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24
00:04:13.677  Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so
00:04:13.677  Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24
00:04:13.677  Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so
00:04:13.677  Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24
00:04:13.677  Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so
00:04:13.677  Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24
00:04:13.677  Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so
00:04:13.677  Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24
00:04:13.677  Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so
00:04:13.677  Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24
00:04:13.677  Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so
00:04:13.677  Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24
00:04:13.677  Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so
00:04:13.677  './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so'
00:04:13.677  './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24'
00:04:13.677  './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0'
00:04:13.677  './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so'
00:04:13.677  './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24'
00:04:13.677  './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0'
00:04:13.677  './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so'
00:04:13.677  './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24'
00:04:13.677  './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0'
00:04:13.677  './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so'
00:04:13.677  './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24'
00:04:13.677  './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0'
00:04:13.678  Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24
00:04:13.678  Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so
00:04:13.678  Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24
00:04:13.678  Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so
00:04:13.678  Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24
00:04:13.678  Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so
00:04:13.678  Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24
00:04:13.678  Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so
00:04:13.678  Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24
00:04:13.678  Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so
00:04:13.678  Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24
00:04:13.678  Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so
00:04:13.678  Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24
00:04:13.678  Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so
00:04:13.678  Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24
00:04:13.678  Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so
00:04:13.678  Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24
00:04:13.678  Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so
00:04:13.678  Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24
00:04:13.678  Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so
00:04:13.678  Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24
00:04:13.678  Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so
00:04:13.678  Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24
00:04:13.678  Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so
00:04:13.678  Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24
00:04:13.678  Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so
00:04:13.678  Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24
00:04:13.678  Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so
00:04:13.678  Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24
00:04:13.678  Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so
00:04:13.678  Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0'
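The quoted '…' -> '…' lines and the symlink-drivers-solibs.sh step above arrange the driver (PMD) shared objects, bus_pci, bus_vdev, mempool_ring and net_i40e, under the plugin directory lib/dpdk/pmds-24.0 while keeping their names reachable from lib/ itself. The sketch below only reproduces the verbose link format that coreutils prints; it is an illustration under that assumption, not the contents of the actual install script.

    # Illustration only: "ln -sfv TARGET LINK" prints 'LINK' -> 'TARGET', which is
    # the format seen above for the PMD libraries. Paths come from this log; the
    # real script may create the links differently.
    cd /home/vagrant/spdk_repo/dpdk/build/lib
    ln -sfv dpdk/pmds-24.0/librte_net_i40e.so ./librte_net_i40e.so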
00:04:13.678   11:25:39 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat
00:04:13.678   11:25:39 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk
00:04:13.678  
00:04:13.678  real	0m51.495s
00:04:13.678  user	5m57.901s
00:04:13.678  sys	0m59.697s
00:04:13.678   11:25:39 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:04:13.678   11:25:39 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x
00:04:13.678  ************************************
00:04:13.678  END TEST build_native_dpdk
00:04:13.678  ************************************
00:04:13.678   11:25:39  -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:04:13.678   11:25:39  -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:04:13.678   11:25:39  -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:04:13.678   11:25:39  -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:04:13.678   11:25:39  -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:04:13.678   11:25:39  -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:04:13.678   11:25:39  -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:04:13.678   11:25:39  -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared
00:04:13.939  Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs...
00:04:14.199  DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib
00:04:14.199  DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include
00:04:14.199  Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:04:14.457  Using 'verbs' RDMA provider
00:04:31.295  Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:04:43.539  Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:04:44.106  Creating mk/config.mk...done.
00:04:44.106  Creating mk/cc.flags.mk...done.
00:04:44.106  Type 'make' to build.
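The configure step above builds SPDK against the just-installed DPDK (--with-dpdk=/home/vagrant/spdk_repo/dpdk/build) with shared libraries enabled, so anything run from this workspace must be able to resolve the librte_* shared objects from that non-system prefix at load time. One common way to do that is sketched below; it is an assumption about runtime setup, not a command shown in this log.

    # Assumption (not taken from this log): make the shared DPDK libraries under
    # the private prefix visible to the dynamic linker for binaries built next.
    export LD_LIBRARY_PATH=/home/vagrant/spdk_repo/dpdk/build/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}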
00:04:44.106   11:26:09  -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:04:44.106   11:26:09  -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:04:44.106   11:26:09  -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:04:44.106   11:26:09  -- common/autotest_common.sh@10 -- $ set +x
00:04:44.106  ************************************
00:04:44.106  START TEST make
00:04:44.106  ************************************
00:04:44.106   11:26:09 make -- common/autotest_common.sh@1125 -- $ make -j10
00:04:44.411  make[1]: Nothing to be done for 'all'.
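The quiet make output that follows abbreviates each rule to a short tag (CC, LIB, SO, SYMLINK). The expansion below is an interpretation of this log plus one hypothetical command sequence with flags omitted; the actual recipes and compiler flags come from the mk/config.mk and mk/cc.flags.mk files generated above and are not reproduced here.

    # Interpretation of the tags in the build output below:
    #   CC      path/file.o       compile one C source into an object file
    #   LIB     libspdk_x.a       archive the objects into a static library
    #   SO      libspdk_x.so.N.M  link the versioned shared object
    #   SYMLINK libspdk_x.so      create the unversioned library symlink
    # Hypothetical expansion of one such sequence (real flags omitted):
    cc -c lib/ut/ut.c -o lib/ut/ut.o              # CC      lib/ut/ut.o
    ar crs libspdk_ut.a lib/ut/ut.o               # LIB     libspdk_ut.a
    cc -shared -o libspdk_ut.so.2.0 lib/ut/ut.o   # SO      libspdk_ut.so.2.0
    ln -sf libspdk_ut.so.2.0 libspdk_ut.so        # SYMLINK libspdk_ut.so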
00:05:40.648    CC lib/ut/ut.o
00:05:40.648    CC lib/log/log.o
00:05:40.648    CC lib/log/log_flags.o
00:05:40.648    CC lib/ut_mock/mock.o
00:05:40.648    CC lib/log/log_deprecated.o
00:05:40.648    LIB libspdk_ut.a
00:05:40.648    LIB libspdk_ut_mock.a
00:05:40.648    LIB libspdk_log.a
00:05:40.648    SO libspdk_ut.so.2.0
00:05:40.648    SO libspdk_ut_mock.so.6.0
00:05:40.648    SO libspdk_log.so.7.0
00:05:40.648    SYMLINK libspdk_ut.so
00:05:40.648    SYMLINK libspdk_ut_mock.so
00:05:40.648    SYMLINK libspdk_log.so
00:05:40.648    CC lib/dma/dma.o
00:05:40.648    CC lib/ioat/ioat.o
00:05:40.649    CC lib/util/bit_array.o
00:05:40.649    CC lib/util/crc16.o
00:05:40.649    CC lib/util/base64.o
00:05:40.649    CC lib/util/crc32c.o
00:05:40.649    CC lib/util/cpuset.o
00:05:40.649    CC lib/util/crc32.o
00:05:40.649    CXX lib/trace_parser/trace.o
00:05:40.649    CC lib/vfio_user/host/vfio_user_pci.o
00:05:40.649    CC lib/util/crc32_ieee.o
00:05:40.649    CC lib/util/crc64.o
00:05:40.649    CC lib/vfio_user/host/vfio_user.o
00:05:40.649    CC lib/util/dif.o
00:05:40.649    CC lib/util/fd.o
00:05:40.649    LIB libspdk_dma.a
00:05:40.649    CC lib/util/fd_group.o
00:05:40.649    SO libspdk_dma.so.5.0
00:05:40.649    CC lib/util/file.o
00:05:40.649    CC lib/util/hexlify.o
00:05:40.649    SYMLINK libspdk_dma.so
00:05:40.649    CC lib/util/iov.o
00:05:40.649    LIB libspdk_ioat.a
00:05:40.649    SO libspdk_ioat.so.7.0
00:05:40.649    CC lib/util/math.o
00:05:40.649    CC lib/util/net.o
00:05:40.649    LIB libspdk_vfio_user.a
00:05:40.649    SO libspdk_vfio_user.so.5.0
00:05:40.649    SYMLINK libspdk_ioat.so
00:05:40.649    CC lib/util/pipe.o
00:05:40.649    CC lib/util/strerror_tls.o
00:05:40.649    CC lib/util/string.o
00:05:40.649    SYMLINK libspdk_vfio_user.so
00:05:40.649    CC lib/util/uuid.o
00:05:40.649    CC lib/util/xor.o
00:05:40.649    CC lib/util/zipf.o
00:05:40.649    CC lib/util/md5.o
00:05:40.649    LIB libspdk_util.a
00:05:40.649    SO libspdk_util.so.10.0
00:05:40.649    LIB libspdk_trace_parser.a
00:05:40.649    SO libspdk_trace_parser.so.6.0
00:05:40.649    SYMLINK libspdk_util.so
00:05:40.649    SYMLINK libspdk_trace_parser.so
00:05:40.649    CC lib/vmd/vmd.o
00:05:40.649    CC lib/vmd/led.o
00:05:40.649    CC lib/idxd/idxd.o
00:05:40.649    CC lib/idxd/idxd_user.o
00:05:40.649    CC lib/idxd/idxd_kernel.o
00:05:40.649    CC lib/rdma_utils/rdma_utils.o
00:05:40.649    CC lib/env_dpdk/env.o
00:05:40.649    CC lib/rdma_provider/common.o
00:05:40.649    CC lib/json/json_parse.o
00:05:40.649    CC lib/conf/conf.o
00:05:40.649    CC lib/rdma_provider/rdma_provider_verbs.o
00:05:40.649    CC lib/json/json_util.o
00:05:40.649    CC lib/env_dpdk/memory.o
00:05:40.649    CC lib/env_dpdk/pci.o
00:05:40.649    LIB libspdk_conf.a
00:05:40.649    SO libspdk_conf.so.6.0
00:05:40.649    CC lib/json/json_write.o
00:05:40.649    SYMLINK libspdk_conf.so
00:05:40.649    CC lib/env_dpdk/init.o
00:05:40.649    LIB libspdk_rdma_utils.a
00:05:40.649    LIB libspdk_rdma_provider.a
00:05:40.649    SO libspdk_rdma_utils.so.1.0
00:05:40.649    SO libspdk_rdma_provider.so.6.0
00:05:40.649    SYMLINK libspdk_rdma_utils.so
00:05:40.649    SYMLINK libspdk_rdma_provider.so
00:05:40.649    CC lib/env_dpdk/threads.o
00:05:40.649    CC lib/env_dpdk/pci_ioat.o
00:05:40.649    CC lib/env_dpdk/pci_virtio.o
00:05:40.649    CC lib/env_dpdk/pci_vmd.o
00:05:40.649    CC lib/env_dpdk/pci_idxd.o
00:05:40.649    CC lib/env_dpdk/pci_event.o
00:05:40.649    CC lib/env_dpdk/sigbus_handler.o
00:05:40.649    LIB libspdk_json.a
00:05:40.649    CC lib/env_dpdk/pci_dpdk.o
00:05:40.649    LIB libspdk_idxd.a
00:05:40.649    SO libspdk_json.so.6.0
00:05:40.649    CC lib/env_dpdk/pci_dpdk_2207.o
00:05:40.649    CC lib/env_dpdk/pci_dpdk_2211.o
00:05:40.649    SO libspdk_idxd.so.12.1
00:05:40.649    SYMLINK libspdk_json.so
00:05:40.649    SYMLINK libspdk_idxd.so
00:05:40.649    LIB libspdk_vmd.a
00:05:40.649    SO libspdk_vmd.so.6.0
00:05:40.649    SYMLINK libspdk_vmd.so
00:05:40.649    CC lib/jsonrpc/jsonrpc_server.o
00:05:40.649    CC lib/jsonrpc/jsonrpc_server_tcp.o
00:05:40.649    CC lib/jsonrpc/jsonrpc_client.o
00:05:40.649    CC lib/jsonrpc/jsonrpc_client_tcp.o
00:05:40.649    LIB libspdk_jsonrpc.a
00:05:40.649    SO libspdk_jsonrpc.so.6.0
00:05:40.649    SYMLINK libspdk_jsonrpc.so
00:05:40.649    LIB libspdk_env_dpdk.a
00:05:40.649    SO libspdk_env_dpdk.so.15.0
00:05:40.649    CC lib/rpc/rpc.o
00:05:40.649    SYMLINK libspdk_env_dpdk.so
00:05:40.649    LIB libspdk_rpc.a
00:05:40.649    SO libspdk_rpc.so.6.0
00:05:40.649    SYMLINK libspdk_rpc.so
00:05:40.649    CC lib/keyring/keyring.o
00:05:40.649    CC lib/notify/notify.o
00:05:40.649    CC lib/notify/notify_rpc.o
00:05:40.649    CC lib/keyring/keyring_rpc.o
00:05:40.649    CC lib/trace/trace.o
00:05:40.649    CC lib/trace/trace_flags.o
00:05:40.649    CC lib/trace/trace_rpc.o
00:05:40.649    LIB libspdk_notify.a
00:05:40.649    LIB libspdk_keyring.a
00:05:40.649    SO libspdk_notify.so.6.0
00:05:40.649    LIB libspdk_trace.a
00:05:40.649    SO libspdk_keyring.so.2.0
00:05:40.649    SYMLINK libspdk_notify.so
00:05:40.649    SO libspdk_trace.so.11.0
00:05:40.649    SYMLINK libspdk_keyring.so
00:05:40.649    SYMLINK libspdk_trace.so
00:05:40.649    CC lib/sock/sock.o
00:05:40.649    CC lib/sock/sock_rpc.o
00:05:40.649    CC lib/thread/thread.o
00:05:40.649    CC lib/thread/iobuf.o
00:05:40.649    LIB libspdk_sock.a
00:05:40.649    SO libspdk_sock.so.10.0
00:05:40.649    SYMLINK libspdk_sock.so
00:05:40.649    CC lib/nvme/nvme_fabric.o
00:05:40.649    CC lib/nvme/nvme_ns_cmd.o
00:05:40.649    CC lib/nvme/nvme_ctrlr_cmd.o
00:05:40.649    CC lib/nvme/nvme_ctrlr.o
00:05:40.649    CC lib/nvme/nvme_ns.o
00:05:40.649    CC lib/nvme/nvme.o
00:05:40.649    CC lib/nvme/nvme_pcie_common.o
00:05:40.649    CC lib/nvme/nvme_pcie.o
00:05:40.649    CC lib/nvme/nvme_qpair.o
00:05:40.649    LIB libspdk_thread.a
00:05:40.649    SO libspdk_thread.so.10.1
00:05:40.649    CC lib/nvme/nvme_quirks.o
00:05:40.649    CC lib/nvme/nvme_transport.o
00:05:40.649    CC lib/nvme/nvme_discovery.o
00:05:40.649    SYMLINK libspdk_thread.so
00:05:40.649    CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:05:40.649    CC lib/nvme/nvme_ns_ocssd_cmd.o
00:05:40.649    CC lib/nvme/nvme_tcp.o
00:05:40.649    CC lib/accel/accel.o
00:05:40.649    CC lib/nvme/nvme_opal.o
00:05:40.649    CC lib/blob/blobstore.o
00:05:40.649    CC lib/blob/request.o
00:05:40.649    CC lib/blob/zeroes.o
00:05:40.649    CC lib/blob/blob_bs_dev.o
00:05:40.649    CC lib/init/json_config.o
00:05:40.649    CC lib/init/subsystem.o
00:05:40.649    CC lib/init/subsystem_rpc.o
00:05:40.649    CC lib/init/rpc.o
00:05:40.649    CC lib/accel/accel_rpc.o
00:05:40.649    CC lib/virtio/virtio.o
00:05:40.909    CC lib/accel/accel_sw.o
00:05:40.909    CC lib/nvme/nvme_io_msg.o
00:05:40.909    CC lib/nvme/nvme_poll_group.o
00:05:40.909    LIB libspdk_init.a
00:05:40.909    SO libspdk_init.so.6.0
00:05:40.909    CC lib/nvme/nvme_zns.o
00:05:40.909    SYMLINK libspdk_init.so
00:05:40.909    CC lib/nvme/nvme_stubs.o
00:05:41.168    CC lib/virtio/virtio_vhost_user.o
00:05:41.168    CC lib/virtio/virtio_vfio_user.o
00:05:41.168    LIB libspdk_accel.a
00:05:41.427    SO libspdk_accel.so.16.0
00:05:41.427    CC lib/nvme/nvme_auth.o
00:05:41.427    SYMLINK libspdk_accel.so
00:05:41.427    CC lib/nvme/nvme_cuse.o
00:05:41.427    CC lib/virtio/virtio_pci.o
00:05:41.427    CC lib/nvme/nvme_rdma.o
00:05:41.687    CC lib/fsdev/fsdev.o
00:05:41.687    CC lib/fsdev/fsdev_io.o
00:05:41.687    CC lib/bdev/bdev.o
00:05:41.687    CC lib/event/app.o
00:05:41.687    LIB libspdk_virtio.a
00:05:41.687    SO libspdk_virtio.so.7.0
00:05:41.687    CC lib/fsdev/fsdev_rpc.o
00:05:41.946    SYMLINK libspdk_virtio.so
00:05:41.946    CC lib/event/reactor.o
00:05:41.946    CC lib/bdev/bdev_rpc.o
00:05:41.946    CC lib/bdev/bdev_zone.o
00:05:42.206    CC lib/event/log_rpc.o
00:05:42.206    CC lib/bdev/part.o
00:05:42.206    CC lib/event/app_rpc.o
00:05:42.206    CC lib/event/scheduler_static.o
00:05:42.206    CC lib/bdev/scsi_nvme.o
00:05:42.465    LIB libspdk_fsdev.a
00:05:42.465    SO libspdk_fsdev.so.1.0
00:05:42.465    SYMLINK libspdk_fsdev.so
00:05:42.465    LIB libspdk_event.a
00:05:42.465    SO libspdk_event.so.14.0
00:05:42.725    SYMLINK libspdk_event.so
00:05:42.725    CC lib/fuse_dispatcher/fuse_dispatcher.o
00:05:42.984    LIB libspdk_nvme.a
00:05:43.244    SO libspdk_nvme.so.14.0
00:05:43.503    SYMLINK libspdk_nvme.so
00:05:43.503    LIB libspdk_fuse_dispatcher.a
00:05:43.503    SO libspdk_fuse_dispatcher.so.1.0
00:05:43.503    SYMLINK libspdk_fuse_dispatcher.so
00:05:43.763    LIB libspdk_blob.a
00:05:44.022    SO libspdk_blob.so.11.0
00:05:44.022    SYMLINK libspdk_blob.so
00:05:44.592    CC lib/blobfs/blobfs.o
00:05:44.592    CC lib/blobfs/tree.o
00:05:44.592    CC lib/lvol/lvol.o
00:05:44.592    LIB libspdk_bdev.a
00:05:44.592    SO libspdk_bdev.so.16.0
00:05:44.851    SYMLINK libspdk_bdev.so
00:05:45.111    CC lib/scsi/lun.o
00:05:45.111    CC lib/scsi/dev.o
00:05:45.111    CC lib/scsi/port.o
00:05:45.111    CC lib/ftl/ftl_core.o
00:05:45.111    CC lib/nbd/nbd.o
00:05:45.111    CC lib/scsi/scsi.o
00:05:45.111    CC lib/ublk/ublk.o
00:05:45.111    CC lib/nvmf/ctrlr.o
00:05:45.111    CC lib/nvmf/ctrlr_discovery.o
00:05:45.370    CC lib/scsi/scsi_bdev.o
00:05:45.370    CC lib/ftl/ftl_init.o
00:05:45.370    CC lib/nvmf/ctrlr_bdev.o
00:05:45.370    LIB libspdk_blobfs.a
00:05:45.370    SO libspdk_blobfs.so.10.0
00:05:45.370    SYMLINK libspdk_blobfs.so
00:05:45.370    CC lib/nvmf/subsystem.o
00:05:45.370    CC lib/nvmf/nvmf.o
00:05:45.629    CC lib/ftl/ftl_layout.o
00:05:45.629    CC lib/nbd/nbd_rpc.o
00:05:45.629    LIB libspdk_lvol.a
00:05:45.629    SO libspdk_lvol.so.10.0
00:05:45.629    SYMLINK libspdk_lvol.so
00:05:45.629    LIB libspdk_nbd.a
00:05:45.629    CC lib/ftl/ftl_debug.o
00:05:45.629    CC lib/ftl/ftl_io.o
00:05:45.629    SO libspdk_nbd.so.7.0
00:05:45.888    CC lib/scsi/scsi_pr.o
00:05:45.888    CC lib/ublk/ublk_rpc.o
00:05:45.888    SYMLINK libspdk_nbd.so
00:05:45.888    CC lib/scsi/scsi_rpc.o
00:05:45.888    CC lib/scsi/task.o
00:05:45.888    CC lib/ftl/ftl_sb.o
00:05:45.888    CC lib/nvmf/nvmf_rpc.o
00:05:45.888    CC lib/nvmf/transport.o
00:05:45.888    LIB libspdk_ublk.a
00:05:46.151    SO libspdk_ublk.so.3.0
00:05:46.151    CC lib/nvmf/tcp.o
00:05:46.151    CC lib/ftl/ftl_l2p.o
00:05:46.151    SYMLINK libspdk_ublk.so
00:05:46.151    CC lib/ftl/ftl_l2p_flat.o
00:05:46.151    LIB libspdk_scsi.a
00:05:46.151    CC lib/ftl/ftl_nv_cache.o
00:05:46.151    SO libspdk_scsi.so.9.0
00:05:46.415    SYMLINK libspdk_scsi.so
00:05:46.415    CC lib/ftl/ftl_band.o
00:05:46.415    CC lib/iscsi/conn.o
00:05:46.415    CC lib/vhost/vhost.o
00:05:46.415    CC lib/nvmf/stubs.o
00:05:46.675    CC lib/ftl/ftl_band_ops.o
00:05:46.935    CC lib/ftl/ftl_writer.o
00:05:46.935    CC lib/nvmf/mdns_server.o
00:05:46.935    CC lib/nvmf/rdma.o
00:05:46.935    CC lib/ftl/ftl_rq.o
00:05:46.935    CC lib/ftl/ftl_reloc.o
00:05:47.195    CC lib/vhost/vhost_rpc.o
00:05:47.195    CC lib/iscsi/init_grp.o
00:05:47.195    CC lib/iscsi/iscsi.o
00:05:47.195    CC lib/iscsi/param.o
00:05:47.195    CC lib/nvmf/auth.o
00:05:47.195    CC lib/vhost/vhost_scsi.o
00:05:47.195    CC lib/iscsi/portal_grp.o
00:05:47.454    CC lib/ftl/ftl_l2p_cache.o
00:05:47.454    CC lib/ftl/ftl_p2l.o
00:05:47.714    CC lib/iscsi/tgt_node.o
00:05:47.714    CC lib/iscsi/iscsi_subsystem.o
00:05:47.714    CC lib/vhost/vhost_blk.o
00:05:47.714    CC lib/ftl/ftl_p2l_log.o
00:05:47.973    CC lib/ftl/mngt/ftl_mngt.o
00:05:47.973    CC lib/ftl/mngt/ftl_mngt_bdev.o
00:05:47.973    CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:05:47.973    CC lib/ftl/mngt/ftl_mngt_startup.o
00:05:48.232    CC lib/vhost/rte_vhost_user.o
00:05:48.232    CC lib/ftl/mngt/ftl_mngt_md.o
00:05:48.232    CC lib/ftl/mngt/ftl_mngt_misc.o
00:05:48.232    CC lib/ftl/mngt/ftl_mngt_ioch.o
00:05:48.232    CC lib/ftl/mngt/ftl_mngt_l2p.o
00:05:48.232    CC lib/ftl/mngt/ftl_mngt_band.o
00:05:48.232    CC lib/ftl/mngt/ftl_mngt_self_test.o
00:05:48.491    CC lib/iscsi/iscsi_rpc.o
00:05:48.491    CC lib/iscsi/task.o
00:05:48.491    CC lib/ftl/mngt/ftl_mngt_p2l.o
00:05:48.491    CC lib/ftl/mngt/ftl_mngt_recovery.o
00:05:48.491    CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:05:48.491    CC lib/ftl/utils/ftl_conf.o
00:05:48.750    CC lib/ftl/utils/ftl_md.o
00:05:48.750    CC lib/ftl/utils/ftl_mempool.o
00:05:48.750    CC lib/ftl/utils/ftl_bitmap.o
00:05:48.750    CC lib/ftl/utils/ftl_property.o
00:05:48.750    CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:05:48.750    CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:05:48.750    LIB libspdk_iscsi.a
00:05:49.009    SO libspdk_iscsi.so.8.0
00:05:49.009    CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:05:49.009    CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:05:49.009    CC lib/ftl/upgrade/ftl_band_upgrade.o
00:05:49.009    CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:05:49.009    CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:05:49.009    SYMLINK libspdk_iscsi.so
00:05:49.009    CC lib/ftl/upgrade/ftl_sb_v3.o
00:05:49.009    CC lib/ftl/upgrade/ftl_sb_v5.o
00:05:49.009    CC lib/ftl/nvc/ftl_nvc_dev.o
00:05:49.276    LIB libspdk_vhost.a
00:05:49.276    CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:05:49.276    CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:05:49.276    CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:05:49.276    SO libspdk_vhost.so.8.0
00:05:49.276    CC lib/ftl/base/ftl_base_dev.o
00:05:49.276    CC lib/ftl/base/ftl_base_bdev.o
00:05:49.276    CC lib/ftl/ftl_trace.o
00:05:49.276    SYMLINK libspdk_vhost.so
00:05:49.536    LIB libspdk_nvmf.a
00:05:49.536    LIB libspdk_ftl.a
00:05:49.795    SO libspdk_nvmf.so.19.0
00:05:49.795    SO libspdk_ftl.so.9.0
00:05:50.054    SYMLINK libspdk_nvmf.so
00:05:50.054    SYMLINK libspdk_ftl.so
00:05:50.622    CC module/env_dpdk/env_dpdk_rpc.o
00:05:50.622    CC module/sock/posix/posix.o
00:05:50.622    CC module/accel/ioat/accel_ioat.o
00:05:50.622    CC module/scheduler/dpdk_governor/dpdk_governor.o
00:05:50.622    CC module/fsdev/aio/fsdev_aio.o
00:05:50.622    CC module/scheduler/dynamic/scheduler_dynamic.o
00:05:50.622    CC module/blob/bdev/blob_bdev.o
00:05:50.622    CC module/keyring/file/keyring.o
00:05:50.622    CC module/scheduler/gscheduler/gscheduler.o
00:05:50.622    CC module/accel/error/accel_error.o
00:05:50.622    LIB libspdk_env_dpdk_rpc.a
00:05:50.622    SO libspdk_env_dpdk_rpc.so.6.0
00:05:50.622    SYMLINK libspdk_env_dpdk_rpc.so
00:05:50.622    CC module/accel/error/accel_error_rpc.o
00:05:50.622    CC module/keyring/file/keyring_rpc.o
00:05:50.622    LIB libspdk_scheduler_dpdk_governor.a
00:05:50.622    LIB libspdk_scheduler_gscheduler.a
00:05:50.622    SO libspdk_scheduler_dpdk_governor.so.4.0
00:05:50.622    SO libspdk_scheduler_gscheduler.so.4.0
00:05:50.622    CC module/accel/ioat/accel_ioat_rpc.o
00:05:50.622    LIB libspdk_scheduler_dynamic.a
00:05:50.882    CC module/fsdev/aio/fsdev_aio_rpc.o
00:05:50.882    SO libspdk_scheduler_dynamic.so.4.0
00:05:50.882    SYMLINK libspdk_scheduler_dpdk_governor.so
00:05:50.882    SYMLINK libspdk_scheduler_gscheduler.so
00:05:50.882    SYMLINK libspdk_scheduler_dynamic.so
00:05:50.882    CC module/fsdev/aio/linux_aio_mgr.o
00:05:50.882    LIB libspdk_accel_error.a
00:05:50.882    LIB libspdk_keyring_file.a
00:05:50.882    SO libspdk_accel_error.so.2.0
00:05:50.882    SO libspdk_keyring_file.so.2.0
00:05:50.882    LIB libspdk_accel_ioat.a
00:05:50.882    LIB libspdk_blob_bdev.a
00:05:50.882    SO libspdk_accel_ioat.so.6.0
00:05:50.882    SYMLINK libspdk_keyring_file.so
00:05:50.882    SO libspdk_blob_bdev.so.11.0
00:05:50.882    SYMLINK libspdk_accel_error.so
00:05:50.882    CC module/accel/iaa/accel_iaa.o
00:05:50.882    CC module/accel/iaa/accel_iaa_rpc.o
00:05:50.882    CC module/accel/dsa/accel_dsa_rpc.o
00:05:50.882    CC module/accel/dsa/accel_dsa.o
00:05:50.882    SYMLINK libspdk_accel_ioat.so
00:05:50.882    SYMLINK libspdk_blob_bdev.so
00:05:51.141    CC module/keyring/linux/keyring.o
00:05:51.141    CC module/keyring/linux/keyring_rpc.o
00:05:51.141    LIB libspdk_accel_iaa.a
00:05:51.141    SO libspdk_accel_iaa.so.3.0
00:05:51.141    CC module/bdev/error/vbdev_error.o
00:05:51.141    LIB libspdk_keyring_linux.a
00:05:51.141    CC module/bdev/delay/vbdev_delay.o
00:05:51.141    CC module/blobfs/bdev/blobfs_bdev.o
00:05:51.141    CC module/bdev/gpt/gpt.o
00:05:51.141    SO libspdk_keyring_linux.so.1.0
00:05:51.141    SYMLINK libspdk_accel_iaa.so
00:05:51.141    CC module/bdev/gpt/vbdev_gpt.o
00:05:51.400    SYMLINK libspdk_keyring_linux.so
00:05:51.400    CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:05:51.400    LIB libspdk_fsdev_aio.a
00:05:51.400    LIB libspdk_accel_dsa.a
00:05:51.400    SO libspdk_fsdev_aio.so.1.0
00:05:51.400    SO libspdk_accel_dsa.so.5.0
00:05:51.400    LIB libspdk_sock_posix.a
00:05:51.400    CC module/bdev/lvol/vbdev_lvol.o
00:05:51.400    SO libspdk_sock_posix.so.6.0
00:05:51.400    SYMLINK libspdk_fsdev_aio.so
00:05:51.400    SYMLINK libspdk_accel_dsa.so
00:05:51.400    CC module/bdev/delay/vbdev_delay_rpc.o
00:05:51.400    CC module/bdev/error/vbdev_error_rpc.o
00:05:51.400    SYMLINK libspdk_sock_posix.so
00:05:51.400    CC module/bdev/lvol/vbdev_lvol_rpc.o
00:05:51.400    LIB libspdk_blobfs_bdev.a
00:05:51.658    SO libspdk_blobfs_bdev.so.6.0
00:05:51.658    LIB libspdk_bdev_gpt.a
00:05:51.658    SO libspdk_bdev_gpt.so.6.0
00:05:51.658    LIB libspdk_bdev_error.a
00:05:51.658    SYMLINK libspdk_blobfs_bdev.so
00:05:51.658    CC module/bdev/null/bdev_null.o
00:05:51.658    SO libspdk_bdev_error.so.6.0
00:05:51.658    CC module/bdev/malloc/bdev_malloc.o
00:05:51.658    SYMLINK libspdk_bdev_gpt.so
00:05:51.658    CC module/bdev/nvme/bdev_nvme.o
00:05:51.658    LIB libspdk_bdev_delay.a
00:05:51.658    SO libspdk_bdev_delay.so.6.0
00:05:51.658    SYMLINK libspdk_bdev_error.so
00:05:51.658    CC module/bdev/null/bdev_null_rpc.o
00:05:51.658    SYMLINK libspdk_bdev_delay.so
00:05:51.658    CC module/bdev/malloc/bdev_malloc_rpc.o
00:05:51.922    CC module/bdev/passthru/vbdev_passthru.o
00:05:51.922    CC module/bdev/raid/bdev_raid.o
00:05:51.922    CC module/bdev/split/vbdev_split.o
00:05:51.922    CC module/bdev/raid/bdev_raid_rpc.o
00:05:51.922    CC module/bdev/raid/bdev_raid_sb.o
00:05:51.922    LIB libspdk_bdev_null.a
00:05:51.922    LIB libspdk_bdev_lvol.a
00:05:51.922    SO libspdk_bdev_null.so.6.0
00:05:51.922    CC module/bdev/raid/raid0.o
00:05:51.922    SO libspdk_bdev_lvol.so.6.0
00:05:51.922    SYMLINK libspdk_bdev_null.so
00:05:51.922    CC module/bdev/nvme/bdev_nvme_rpc.o
00:05:51.922    CC module/bdev/split/vbdev_split_rpc.o
00:05:51.922    SYMLINK libspdk_bdev_lvol.so
00:05:51.922    CC module/bdev/nvme/nvme_rpc.o
00:05:52.188    CC module/bdev/raid/raid1.o
00:05:52.188    CC module/bdev/passthru/vbdev_passthru_rpc.o
00:05:52.188    LIB libspdk_bdev_malloc.a
00:05:52.188    SO libspdk_bdev_malloc.so.6.0
00:05:52.188    CC module/bdev/raid/concat.o
00:05:52.188    LIB libspdk_bdev_split.a
00:05:52.188    SO libspdk_bdev_split.so.6.0
00:05:52.188    SYMLINK libspdk_bdev_malloc.so
00:05:52.188    CC module/bdev/raid/raid5f.o
00:05:52.188    SYMLINK libspdk_bdev_split.so
00:05:52.188    LIB libspdk_bdev_passthru.a
00:05:52.188    CC module/bdev/nvme/bdev_mdns_client.o
00:05:52.447    SO libspdk_bdev_passthru.so.6.0
00:05:52.447    CC module/bdev/nvme/vbdev_opal.o
00:05:52.447    SYMLINK libspdk_bdev_passthru.so
00:05:52.447    CC module/bdev/zone_block/vbdev_zone_block.o
00:05:52.447    CC module/bdev/aio/bdev_aio.o
00:05:52.447    CC module/bdev/nvme/vbdev_opal_rpc.o
00:05:52.706    CC module/bdev/ftl/bdev_ftl.o
00:05:52.706    CC module/bdev/iscsi/bdev_iscsi.o
00:05:52.706    CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:05:52.706    CC module/bdev/aio/bdev_aio_rpc.o
00:05:52.706    CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:05:52.965    CC module/bdev/ftl/bdev_ftl_rpc.o
00:05:52.965    CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:05:52.965    LIB libspdk_bdev_aio.a
00:05:52.965    SO libspdk_bdev_aio.so.6.0
00:05:52.965    SYMLINK libspdk_bdev_aio.so
00:05:52.965    LIB libspdk_bdev_raid.a
00:05:52.965    LIB libspdk_bdev_iscsi.a
00:05:52.965    LIB libspdk_bdev_zone_block.a
00:05:52.965    CC module/bdev/virtio/bdev_virtio_scsi.o
00:05:52.965    CC module/bdev/virtio/bdev_virtio_rpc.o
00:05:52.965    CC module/bdev/virtio/bdev_virtio_blk.o
00:05:52.965    SO libspdk_bdev_raid.so.6.0
00:05:52.965    SO libspdk_bdev_iscsi.so.6.0
00:05:52.965    LIB libspdk_bdev_ftl.a
00:05:53.224    SO libspdk_bdev_zone_block.so.6.0
00:05:53.224    SO libspdk_bdev_ftl.so.6.0
00:05:53.224    SYMLINK libspdk_bdev_iscsi.so
00:05:53.224    SYMLINK libspdk_bdev_zone_block.so
00:05:53.224    SYMLINK libspdk_bdev_raid.so
00:05:53.224    SYMLINK libspdk_bdev_ftl.so
00:05:53.484    LIB libspdk_bdev_virtio.a
00:05:53.742    SO libspdk_bdev_virtio.so.6.0
00:05:53.742    SYMLINK libspdk_bdev_virtio.so
00:05:54.310    LIB libspdk_bdev_nvme.a
00:05:54.310    SO libspdk_bdev_nvme.so.7.0
00:05:54.570    SYMLINK libspdk_bdev_nvme.so
00:05:55.139    CC module/event/subsystems/vhost_blk/vhost_blk.o
00:05:55.139    CC module/event/subsystems/vmd/vmd.o
00:05:55.139    CC module/event/subsystems/iobuf/iobuf_rpc.o
00:05:55.139    CC module/event/subsystems/iobuf/iobuf.o
00:05:55.139    CC module/event/subsystems/vmd/vmd_rpc.o
00:05:55.139    CC module/event/subsystems/fsdev/fsdev.o
00:05:55.139    CC module/event/subsystems/scheduler/scheduler.o
00:05:55.139    CC module/event/subsystems/keyring/keyring.o
00:05:55.139    CC module/event/subsystems/sock/sock.o
00:05:55.139    LIB libspdk_event_vhost_blk.a
00:05:55.139    LIB libspdk_event_keyring.a
00:05:55.139    LIB libspdk_event_scheduler.a
00:05:55.139    LIB libspdk_event_iobuf.a
00:05:55.139    LIB libspdk_event_fsdev.a
00:05:55.139    SO libspdk_event_vhost_blk.so.3.0
00:05:55.139    SO libspdk_event_keyring.so.1.0
00:05:55.139    LIB libspdk_event_vmd.a
00:05:55.139    LIB libspdk_event_sock.a
00:05:55.139    SO libspdk_event_fsdev.so.1.0
00:05:55.139    SO libspdk_event_scheduler.so.4.0
00:05:55.139    SO libspdk_event_iobuf.so.3.0
00:05:55.139    SO libspdk_event_sock.so.5.0
00:05:55.398    SO libspdk_event_vmd.so.6.0
00:05:55.398    SYMLINK libspdk_event_keyring.so
00:05:55.398    SYMLINK libspdk_event_fsdev.so
00:05:55.398    SYMLINK libspdk_event_vhost_blk.so
00:05:55.398    SYMLINK libspdk_event_scheduler.so
00:05:55.398    SYMLINK libspdk_event_iobuf.so
00:05:55.398    SYMLINK libspdk_event_sock.so
00:05:55.398    SYMLINK libspdk_event_vmd.so
00:05:55.658    CC module/event/subsystems/accel/accel.o
00:05:55.917    LIB libspdk_event_accel.a
00:05:55.917    SO libspdk_event_accel.so.6.0
00:05:55.917    SYMLINK libspdk_event_accel.so
00:05:56.486    CC module/event/subsystems/bdev/bdev.o
00:05:56.486    LIB libspdk_event_bdev.a
00:05:56.486    SO libspdk_event_bdev.so.6.0
00:05:56.745    SYMLINK libspdk_event_bdev.so
00:05:57.005    CC module/event/subsystems/ublk/ublk.o
00:05:57.005    CC module/event/subsystems/nvmf/nvmf_rpc.o
00:05:57.005    CC module/event/subsystems/nvmf/nvmf_tgt.o
00:05:57.005    CC module/event/subsystems/nbd/nbd.o
00:05:57.005    CC module/event/subsystems/scsi/scsi.o
00:05:57.005    LIB libspdk_event_ublk.a
00:05:57.347    LIB libspdk_event_nbd.a
00:05:57.347    SO libspdk_event_ublk.so.3.0
00:05:57.347    LIB libspdk_event_scsi.a
00:05:57.347    SO libspdk_event_nbd.so.6.0
00:05:57.347    SO libspdk_event_scsi.so.6.0
00:05:57.347    LIB libspdk_event_nvmf.a
00:05:57.347    SYMLINK libspdk_event_ublk.so
00:05:57.347    SO libspdk_event_nvmf.so.6.0
00:05:57.347    SYMLINK libspdk_event_scsi.so
00:05:57.347    SYMLINK libspdk_event_nbd.so
00:05:57.347    SYMLINK libspdk_event_nvmf.so
00:05:57.607    CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:05:57.607    CC module/event/subsystems/iscsi/iscsi.o
00:05:57.867    LIB libspdk_event_vhost_scsi.a
00:05:57.867    SO libspdk_event_vhost_scsi.so.3.0
00:05:57.867    LIB libspdk_event_iscsi.a
00:05:57.867    SO libspdk_event_iscsi.so.6.0
00:05:57.867    SYMLINK libspdk_event_vhost_scsi.so
00:05:57.867    SYMLINK libspdk_event_iscsi.so
00:05:58.127    SO libspdk.so.6.0
00:05:58.127    SYMLINK libspdk.so
00:05:58.386    CC test/rpc_client/rpc_client_test.o
00:05:58.386    TEST_HEADER include/spdk/accel.h
00:05:58.386    TEST_HEADER include/spdk/accel_module.h
00:05:58.386    TEST_HEADER include/spdk/assert.h
00:05:58.386    CXX app/trace/trace.o
00:05:58.386    TEST_HEADER include/spdk/barrier.h
00:05:58.386    TEST_HEADER include/spdk/base64.h
00:05:58.386    TEST_HEADER include/spdk/bdev.h
00:05:58.386    TEST_HEADER include/spdk/bdev_module.h
00:05:58.386    TEST_HEADER include/spdk/bdev_zone.h
00:05:58.386    TEST_HEADER include/spdk/bit_array.h
00:05:58.386    TEST_HEADER include/spdk/bit_pool.h
00:05:58.386    TEST_HEADER include/spdk/blob_bdev.h
00:05:58.386    CC examples/interrupt_tgt/interrupt_tgt.o
00:05:58.386    TEST_HEADER include/spdk/blobfs_bdev.h
00:05:58.386    TEST_HEADER include/spdk/blobfs.h
00:05:58.386    TEST_HEADER include/spdk/blob.h
00:05:58.386    TEST_HEADER include/spdk/conf.h
00:05:58.386    TEST_HEADER include/spdk/config.h
00:05:58.386    TEST_HEADER include/spdk/cpuset.h
00:05:58.386    TEST_HEADER include/spdk/crc16.h
00:05:58.386    TEST_HEADER include/spdk/crc32.h
00:05:58.386    TEST_HEADER include/spdk/crc64.h
00:05:58.386    TEST_HEADER include/spdk/dif.h
00:05:58.386    TEST_HEADER include/spdk/dma.h
00:05:58.386    TEST_HEADER include/spdk/endian.h
00:05:58.386    TEST_HEADER include/spdk/env_dpdk.h
00:05:58.386    TEST_HEADER include/spdk/env.h
00:05:58.386    TEST_HEADER include/spdk/event.h
00:05:58.386    TEST_HEADER include/spdk/fd_group.h
00:05:58.386    TEST_HEADER include/spdk/fd.h
00:05:58.386    TEST_HEADER include/spdk/file.h
00:05:58.386    TEST_HEADER include/spdk/fsdev.h
00:05:58.386    TEST_HEADER include/spdk/fsdev_module.h
00:05:58.386    TEST_HEADER include/spdk/ftl.h
00:05:58.386    TEST_HEADER include/spdk/fuse_dispatcher.h
00:05:58.386    TEST_HEADER include/spdk/gpt_spec.h
00:05:58.386    TEST_HEADER include/spdk/hexlify.h
00:05:58.386    TEST_HEADER include/spdk/histogram_data.h
00:05:58.386    CC examples/ioat/perf/perf.o
00:05:58.386    TEST_HEADER include/spdk/idxd.h
00:05:58.386    TEST_HEADER include/spdk/idxd_spec.h
00:05:58.386    CC examples/util/zipf/zipf.o
00:05:58.386    TEST_HEADER include/spdk/init.h
00:05:58.386    TEST_HEADER include/spdk/ioat.h
00:05:58.386    CC test/thread/poller_perf/poller_perf.o
00:05:58.386    TEST_HEADER include/spdk/ioat_spec.h
00:05:58.386    TEST_HEADER include/spdk/iscsi_spec.h
00:05:58.386    TEST_HEADER include/spdk/json.h
00:05:58.386    TEST_HEADER include/spdk/jsonrpc.h
00:05:58.386    TEST_HEADER include/spdk/keyring.h
00:05:58.387    TEST_HEADER include/spdk/keyring_module.h
00:05:58.387    TEST_HEADER include/spdk/likely.h
00:05:58.646    TEST_HEADER include/spdk/log.h
00:05:58.646    TEST_HEADER include/spdk/lvol.h
00:05:58.646    TEST_HEADER include/spdk/md5.h
00:05:58.646    TEST_HEADER include/spdk/memory.h
00:05:58.646    TEST_HEADER include/spdk/mmio.h
00:05:58.646    TEST_HEADER include/spdk/nbd.h
00:05:58.646    TEST_HEADER include/spdk/net.h
00:05:58.646    TEST_HEADER include/spdk/notify.h
00:05:58.646    TEST_HEADER include/spdk/nvme.h
00:05:58.646    CC test/dma/test_dma/test_dma.o
00:05:58.646    TEST_HEADER include/spdk/nvme_intel.h
00:05:58.646    TEST_HEADER include/spdk/nvme_ocssd.h
00:05:58.646    TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:05:58.646    TEST_HEADER include/spdk/nvme_spec.h
00:05:58.646    TEST_HEADER include/spdk/nvme_zns.h
00:05:58.646    TEST_HEADER include/spdk/nvmf_cmd.h
00:05:58.646    TEST_HEADER include/spdk/nvmf_fc_spec.h
00:05:58.646    TEST_HEADER include/spdk/nvmf.h
00:05:58.647    TEST_HEADER include/spdk/nvmf_spec.h
00:05:58.647    TEST_HEADER include/spdk/nvmf_transport.h
00:05:58.647    TEST_HEADER include/spdk/opal.h
00:05:58.647    TEST_HEADER include/spdk/opal_spec.h
00:05:58.647    TEST_HEADER include/spdk/pci_ids.h
00:05:58.647    TEST_HEADER include/spdk/pipe.h
00:05:58.647    TEST_HEADER include/spdk/queue.h
00:05:58.647    CC test/app/bdev_svc/bdev_svc.o
00:05:58.647    TEST_HEADER include/spdk/reduce.h
00:05:58.647    TEST_HEADER include/spdk/rpc.h
00:05:58.647    TEST_HEADER include/spdk/scheduler.h
00:05:58.647    TEST_HEADER include/spdk/scsi.h
00:05:58.647    TEST_HEADER include/spdk/scsi_spec.h
00:05:58.647    TEST_HEADER include/spdk/sock.h
00:05:58.647    TEST_HEADER include/spdk/stdinc.h
00:05:58.647    TEST_HEADER include/spdk/string.h
00:05:58.647    TEST_HEADER include/spdk/thread.h
00:05:58.647    CC test/env/mem_callbacks/mem_callbacks.o
00:05:58.647    TEST_HEADER include/spdk/trace.h
00:05:58.647    TEST_HEADER include/spdk/trace_parser.h
00:05:58.647    TEST_HEADER include/spdk/tree.h
00:05:58.647    TEST_HEADER include/spdk/ublk.h
00:05:58.647    TEST_HEADER include/spdk/util.h
00:05:58.647    TEST_HEADER include/spdk/uuid.h
00:05:58.647    TEST_HEADER include/spdk/version.h
00:05:58.647    TEST_HEADER include/spdk/vfio_user_pci.h
00:05:58.647    TEST_HEADER include/spdk/vfio_user_spec.h
00:05:58.647    TEST_HEADER include/spdk/vhost.h
00:05:58.647    TEST_HEADER include/spdk/vmd.h
00:05:58.647    TEST_HEADER include/spdk/xor.h
00:05:58.647    TEST_HEADER include/spdk/zipf.h
00:05:58.647    CXX test/cpp_headers/accel.o
00:05:58.647    LINK rpc_client_test
00:05:58.647    LINK poller_perf
00:05:58.647    LINK zipf
00:05:58.647    LINK interrupt_tgt
00:05:58.647    LINK ioat_perf
00:05:58.647    LINK bdev_svc
00:05:58.906    CXX test/cpp_headers/accel_module.o
00:05:58.906    LINK spdk_trace
00:05:58.906    CC examples/ioat/verify/verify.o
00:05:58.906    CC test/env/vtophys/vtophys.o
00:05:58.906    CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:05:58.906    CXX test/cpp_headers/assert.o
00:05:58.906    CC test/env/memory/memory_ut.o
00:05:58.906    CC test/event/event_perf/event_perf.o
00:05:59.165    LINK vtophys
00:05:59.165    LINK test_dma
00:05:59.165    LINK env_dpdk_post_init
00:05:59.165    CXX test/cpp_headers/barrier.o
00:05:59.165    LINK verify
00:05:59.165    CC app/trace_record/trace_record.o
00:05:59.165    LINK event_perf
00:05:59.165    CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:05:59.165    LINK mem_callbacks
00:05:59.165    CXX test/cpp_headers/base64.o
00:05:59.424    CC test/env/pci/pci_ut.o
00:05:59.424    CC test/event/reactor/reactor.o
00:05:59.424    CC test/event/reactor_perf/reactor_perf.o
00:05:59.424    CC test/event/app_repeat/app_repeat.o
00:05:59.424    LINK spdk_trace_record
00:05:59.424    CXX test/cpp_headers/bdev.o
00:05:59.424    CC test/event/scheduler/scheduler.o
00:05:59.424    CC examples/thread/thread/thread_ex.o
00:05:59.424    LINK reactor
00:05:59.424    LINK reactor_perf
00:05:59.683    LINK nvme_fuzz
00:05:59.683    LINK app_repeat
00:05:59.683    CXX test/cpp_headers/bdev_module.o
00:05:59.683    LINK scheduler
00:05:59.683    CC app/nvmf_tgt/nvmf_main.o
00:05:59.683    LINK thread
00:05:59.683    LINK pci_ut
00:05:59.683    CXX test/cpp_headers/bdev_zone.o
00:05:59.683    CC examples/sock/hello_world/hello_sock.o
00:05:59.942    CC examples/vmd/lsvmd/lsvmd.o
00:05:59.942    CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:05:59.942    CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:05:59.942    LINK nvmf_tgt
00:05:59.942    CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:05:59.942    LINK lsvmd
00:05:59.942    CXX test/cpp_headers/bit_array.o
00:05:59.942    CXX test/cpp_headers/bit_pool.o
00:06:00.201    LINK hello_sock
00:06:00.201    CC test/accel/dif/dif.o
00:06:00.201    CXX test/cpp_headers/blob_bdev.o
00:06:00.201    CC examples/vmd/led/led.o
00:06:00.201    LINK memory_ut
00:06:00.201    CC test/blobfs/mkfs/mkfs.o
00:06:00.201    CC app/iscsi_tgt/iscsi_tgt.o
00:06:00.201    CXX test/cpp_headers/blobfs_bdev.o
00:06:00.462    LINK led
00:06:00.462    LINK vhost_fuzz
00:06:00.462    CC test/lvol/esnap/esnap.o
00:06:00.462    CC test/app/histogram_perf/histogram_perf.o
00:06:00.462    LINK mkfs
00:06:00.462    LINK iscsi_tgt
00:06:00.462    CXX test/cpp_headers/blobfs.o
00:06:00.462    CC test/app/jsoncat/jsoncat.o
00:06:00.722    LINK histogram_perf
00:06:00.722    LINK jsoncat
00:06:00.722    CC examples/idxd/perf/perf.o
00:06:00.722    CXX test/cpp_headers/blob.o
00:06:00.722    CC test/nvme/aer/aer.o
00:06:00.722    CC test/nvme/reset/reset.o
00:06:00.982    CC test/app/stub/stub.o
00:06:00.982    CC app/spdk_tgt/spdk_tgt.o
00:06:00.982    CXX test/cpp_headers/conf.o
00:06:00.982    LINK dif
00:06:00.982    LINK reset
00:06:00.982    LINK idxd_perf
00:06:00.982    LINK aer
00:06:01.242    LINK stub
00:06:01.242    CC examples/fsdev/hello_world/hello_fsdev.o
00:06:01.242    CXX test/cpp_headers/config.o
00:06:01.242    LINK spdk_tgt
00:06:01.242    CXX test/cpp_headers/cpuset.o
00:06:01.242    CXX test/cpp_headers/crc16.o
00:06:01.242    CC test/nvme/e2edp/nvme_dp.o
00:06:01.242    CC test/nvme/sgl/sgl.o
00:06:01.501    CC test/nvme/overhead/overhead.o
00:06:01.501    CC test/nvme/err_injection/err_injection.o
00:06:01.501    CC app/spdk_lspci/spdk_lspci.o
00:06:01.501    LINK hello_fsdev
00:06:01.501    CC test/bdev/bdevio/bdevio.o
00:06:01.501    CXX test/cpp_headers/crc32.o
00:06:01.501    LINK spdk_lspci
00:06:01.501    LINK err_injection
00:06:01.761    LINK nvme_dp
00:06:01.761    CXX test/cpp_headers/crc64.o
00:06:01.761    LINK sgl
00:06:01.761    LINK overhead
00:06:01.761    CXX test/cpp_headers/dif.o
00:06:01.761    CC examples/accel/perf/accel_perf.o
00:06:01.761    CC app/spdk_nvme_perf/perf.o
00:06:01.761    CC test/nvme/startup/startup.o
00:06:02.021    LINK bdevio
00:06:02.021    CC test/nvme/reserve/reserve.o
00:06:02.021    CC test/nvme/simple_copy/simple_copy.o
00:06:02.021    LINK iscsi_fuzz
00:06:02.021    CXX test/cpp_headers/dma.o
00:06:02.021    CC examples/blob/hello_world/hello_blob.o
00:06:02.021    LINK startup
00:06:02.021    CXX test/cpp_headers/endian.o
00:06:02.280    LINK reserve
00:06:02.280    CXX test/cpp_headers/env_dpdk.o
00:06:02.280    LINK simple_copy
00:06:02.280    LINK hello_blob
00:06:02.280    CC test/nvme/connect_stress/connect_stress.o
00:06:02.280    CC examples/nvme/hello_world/hello_world.o
00:06:02.280    CXX test/cpp_headers/env.o
00:06:02.280    CC examples/nvme/reconnect/reconnect.o
00:06:02.280    LINK accel_perf
00:06:02.540    CC examples/nvme/nvme_manage/nvme_manage.o
00:06:02.540    CC examples/nvme/arbitration/arbitration.o
00:06:02.540    LINK connect_stress
00:06:02.540    CXX test/cpp_headers/event.o
00:06:02.540    LINK hello_world
00:06:02.540    CC examples/blob/cli/blobcli.o
00:06:02.540    CC app/spdk_nvme_identify/identify.o
00:06:02.799    CXX test/cpp_headers/fd_group.o
00:06:02.799    CXX test/cpp_headers/fd.o
00:06:02.799    CC test/nvme/boot_partition/boot_partition.o
00:06:02.799    LINK reconnect
00:06:02.799    LINK arbitration
00:06:02.799    LINK spdk_nvme_perf
00:06:02.799    CXX test/cpp_headers/file.o
00:06:03.058    CC examples/nvme/hotplug/hotplug.o
00:06:03.058    LINK boot_partition
00:06:03.058    CXX test/cpp_headers/fsdev.o
00:06:03.058    CXX test/cpp_headers/fsdev_module.o
00:06:03.058    LINK nvme_manage
00:06:03.058    CXX test/cpp_headers/ftl.o
00:06:03.058    CC examples/nvme/cmb_copy/cmb_copy.o
00:06:03.058    CC test/nvme/compliance/nvme_compliance.o
00:06:03.058    LINK hotplug
00:06:03.319    LINK blobcli
00:06:03.319    CC examples/nvme/abort/abort.o
00:06:03.319    CXX test/cpp_headers/fuse_dispatcher.o
00:06:03.319    CC examples/nvme/pmr_persistence/pmr_persistence.o
00:06:03.319    CC app/spdk_nvme_discover/discovery_aer.o
00:06:03.319    LINK cmb_copy
00:06:03.319    CXX test/cpp_headers/gpt_spec.o
00:06:03.579    CC test/nvme/fused_ordering/fused_ordering.o
00:06:03.579    LINK pmr_persistence
00:06:03.579    CC test/nvme/doorbell_aers/doorbell_aers.o
00:06:03.579    LINK spdk_nvme_discover
00:06:03.579    CXX test/cpp_headers/hexlify.o
00:06:03.579    LINK nvme_compliance
00:06:03.579    LINK abort
00:06:03.579    CXX test/cpp_headers/histogram_data.o
00:06:03.579    LINK spdk_nvme_identify
00:06:03.579    CXX test/cpp_headers/idxd.o
00:06:03.579    CXX test/cpp_headers/idxd_spec.o
00:06:03.579    LINK fused_ordering
00:06:03.579    LINK doorbell_aers
00:06:03.839    CC examples/bdev/hello_world/hello_bdev.o
00:06:03.839    CXX test/cpp_headers/init.o
00:06:03.839    CXX test/cpp_headers/ioat.o
00:06:03.839    CXX test/cpp_headers/ioat_spec.o
00:06:03.839    CC examples/bdev/bdevperf/bdevperf.o
00:06:03.839    CC test/nvme/fdp/fdp.o
00:06:03.839    CC test/nvme/cuse/cuse.o
00:06:03.839    CXX test/cpp_headers/iscsi_spec.o
00:06:03.839    CC app/spdk_top/spdk_top.o
00:06:04.098    CXX test/cpp_headers/json.o
00:06:04.098    CXX test/cpp_headers/jsonrpc.o
00:06:04.098    LINK hello_bdev
00:06:04.098    CXX test/cpp_headers/keyring.o
00:06:04.098    CC app/vhost/vhost.o
00:06:04.098    CXX test/cpp_headers/keyring_module.o
00:06:04.357    CXX test/cpp_headers/likely.o
00:06:04.357    LINK fdp
00:06:04.357    CC app/spdk_dd/spdk_dd.o
00:06:04.357    LINK vhost
00:06:04.357    CC app/fio/nvme/fio_plugin.o
00:06:04.357    CXX test/cpp_headers/log.o
00:06:04.357    CXX test/cpp_headers/lvol.o
00:06:04.616    CXX test/cpp_headers/md5.o
00:06:04.616    CC app/fio/bdev/fio_plugin.o
00:06:04.616    CXX test/cpp_headers/memory.o
00:06:04.616    CXX test/cpp_headers/mmio.o
00:06:04.616    LINK spdk_dd
00:06:04.616    CXX test/cpp_headers/nbd.o
00:06:04.876    CXX test/cpp_headers/net.o
00:06:04.876    CXX test/cpp_headers/notify.o
00:06:04.876    CXX test/cpp_headers/nvme.o
00:06:04.876    LINK bdevperf
00:06:04.876    CXX test/cpp_headers/nvme_intel.o
00:06:04.876    CXX test/cpp_headers/nvme_ocssd.o
00:06:04.876    CXX test/cpp_headers/nvme_ocssd_spec.o
00:06:04.876    LINK spdk_top
00:06:04.876    CXX test/cpp_headers/nvme_spec.o
00:06:04.876    LINK spdk_nvme
00:06:05.135    CXX test/cpp_headers/nvme_zns.o
00:06:05.135    LINK spdk_bdev
00:06:05.135    CXX test/cpp_headers/nvmf_cmd.o
00:06:05.135    CXX test/cpp_headers/nvmf_fc_spec.o
00:06:05.135    CXX test/cpp_headers/nvmf.o
00:06:05.135    CXX test/cpp_headers/nvmf_spec.o
00:06:05.135    CXX test/cpp_headers/nvmf_transport.o
00:06:05.135    CXX test/cpp_headers/opal.o
00:06:05.135    CC examples/nvmf/nvmf/nvmf.o
00:06:05.135    CXX test/cpp_headers/opal_spec.o
00:06:05.395    CXX test/cpp_headers/pci_ids.o
00:06:05.395    CXX test/cpp_headers/pipe.o
00:06:05.395    LINK cuse
00:06:05.395    CXX test/cpp_headers/queue.o
00:06:05.395    CXX test/cpp_headers/reduce.o
00:06:05.395    CXX test/cpp_headers/rpc.o
00:06:05.395    CXX test/cpp_headers/scheduler.o
00:06:05.395    CXX test/cpp_headers/scsi.o
00:06:05.395    CXX test/cpp_headers/scsi_spec.o
00:06:05.395    CXX test/cpp_headers/sock.o
00:06:05.395    CXX test/cpp_headers/stdinc.o
00:06:05.395    CXX test/cpp_headers/string.o
00:06:05.395    CXX test/cpp_headers/thread.o
00:06:05.662    CXX test/cpp_headers/trace.o
00:06:05.662    LINK nvmf
00:06:05.662    CXX test/cpp_headers/trace_parser.o
00:06:05.662    CXX test/cpp_headers/tree.o
00:06:05.662    CXX test/cpp_headers/ublk.o
00:06:05.662    CXX test/cpp_headers/util.o
00:06:05.662    CXX test/cpp_headers/uuid.o
00:06:05.662    CXX test/cpp_headers/version.o
00:06:05.662    CXX test/cpp_headers/vfio_user_pci.o
00:06:05.662    CXX test/cpp_headers/vfio_user_spec.o
00:06:05.662    CXX test/cpp_headers/vhost.o
00:06:05.662    CXX test/cpp_headers/vmd.o
00:06:05.662    CXX test/cpp_headers/xor.o
00:06:05.662    CXX test/cpp_headers/zipf.o
00:06:07.064    LINK esnap
00:06:07.064  ************************************
00:06:07.064  END TEST make
00:06:07.064  ************************************
00:06:07.064  
00:06:07.064  real	1m23.122s
00:06:07.064  user	6m50.739s
00:06:07.064  sys	1m13.675s
00:06:07.064   11:27:33 make -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:06:07.064   11:27:33 make -- common/autotest_common.sh@10 -- $ set +x
00:06:07.064   11:27:33  -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:06:07.064   11:27:33  -- pm/common@29 -- $ signal_monitor_resources TERM
00:06:07.064   11:27:33  -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:06:07.064   11:27:33  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:06:07.064   11:27:33  -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:06:07.064   11:27:33  -- pm/common@44 -- $ pid=6200
00:06:07.064   11:27:33  -- pm/common@50 -- $ kill -TERM 6200
00:06:07.064   11:27:33  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:06:07.064   11:27:33  -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:06:07.064   11:27:33  -- pm/common@44 -- $ pid=6202
00:06:07.064   11:27:33  -- pm/common@50 -- $ kill -TERM 6202
00:06:07.324    11:27:33  -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:06:07.324     11:27:33  -- common/autotest_common.sh@1681 -- # lcov --version
00:06:07.324     11:27:33  -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:06:07.324    11:27:33  -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:06:07.324    11:27:33  -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:07.324    11:27:33  -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:07.324    11:27:33  -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:07.324    11:27:33  -- scripts/common.sh@336 -- # IFS=.-:
00:06:07.324    11:27:33  -- scripts/common.sh@336 -- # read -ra ver1
00:06:07.324    11:27:33  -- scripts/common.sh@337 -- # IFS=.-:
00:06:07.324    11:27:33  -- scripts/common.sh@337 -- # read -ra ver2
00:06:07.324    11:27:33  -- scripts/common.sh@338 -- # local 'op=<'
00:06:07.324    11:27:33  -- scripts/common.sh@340 -- # ver1_l=2
00:06:07.324    11:27:33  -- scripts/common.sh@341 -- # ver2_l=1
00:06:07.324    11:27:33  -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:07.324    11:27:33  -- scripts/common.sh@344 -- # case "$op" in
00:06:07.324    11:27:33  -- scripts/common.sh@345 -- # : 1
00:06:07.324    11:27:33  -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:07.324    11:27:33  -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:07.324     11:27:33  -- scripts/common.sh@365 -- # decimal 1
00:06:07.324     11:27:33  -- scripts/common.sh@353 -- # local d=1
00:06:07.324     11:27:33  -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:07.324     11:27:33  -- scripts/common.sh@355 -- # echo 1
00:06:07.324    11:27:33  -- scripts/common.sh@365 -- # ver1[v]=1
00:06:07.324     11:27:33  -- scripts/common.sh@366 -- # decimal 2
00:06:07.324     11:27:33  -- scripts/common.sh@353 -- # local d=2
00:06:07.324     11:27:33  -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:07.324     11:27:33  -- scripts/common.sh@355 -- # echo 2
00:06:07.324    11:27:33  -- scripts/common.sh@366 -- # ver2[v]=2
00:06:07.324    11:27:33  -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:07.324    11:27:33  -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:07.324    11:27:33  -- scripts/common.sh@368 -- # return 0
00:06:07.324    11:27:33  -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:07.324    11:27:33  -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:06:07.324  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:07.324  		--rc genhtml_branch_coverage=1
00:06:07.324  		--rc genhtml_function_coverage=1
00:06:07.324  		--rc genhtml_legend=1
00:06:07.324  		--rc geninfo_all_blocks=1
00:06:07.324  		--rc geninfo_unexecuted_blocks=1
00:06:07.324  		
00:06:07.324  		'
00:06:07.324    11:27:33  -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:06:07.324  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:07.324  		--rc genhtml_branch_coverage=1
00:06:07.324  		--rc genhtml_function_coverage=1
00:06:07.324  		--rc genhtml_legend=1
00:06:07.324  		--rc geninfo_all_blocks=1
00:06:07.324  		--rc geninfo_unexecuted_blocks=1
00:06:07.324  		
00:06:07.324  		'
00:06:07.324    11:27:33  -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:06:07.324  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:07.324  		--rc genhtml_branch_coverage=1
00:06:07.324  		--rc genhtml_function_coverage=1
00:06:07.324  		--rc genhtml_legend=1
00:06:07.324  		--rc geninfo_all_blocks=1
00:06:07.324  		--rc geninfo_unexecuted_blocks=1
00:06:07.324  		
00:06:07.324  		'
00:06:07.324    11:27:33  -- common/autotest_common.sh@1695 -- # LCOV='lcov 
00:06:07.324  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:07.324  		--rc genhtml_branch_coverage=1
00:06:07.324  		--rc genhtml_function_coverage=1
00:06:07.324  		--rc genhtml_legend=1
00:06:07.324  		--rc geninfo_all_blocks=1
00:06:07.324  		--rc geninfo_unexecuted_blocks=1
00:06:07.324  		
00:06:07.324  		'
00:06:07.324   11:27:33  -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:06:07.324     11:27:33  -- nvmf/common.sh@7 -- # uname -s
00:06:07.324    11:27:33  -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:07.324    11:27:33  -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:07.324    11:27:33  -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:07.324    11:27:33  -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:07.324    11:27:33  -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:07.324    11:27:33  -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:07.324    11:27:33  -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:07.324    11:27:33  -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:07.324    11:27:33  -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:07.324     11:27:33  -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:07.324    11:27:33  -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52311dfc-f4ec-4043-8e88-1c9590101b2f
00:06:07.324    11:27:33  -- nvmf/common.sh@18 -- # NVME_HOSTID=52311dfc-f4ec-4043-8e88-1c9590101b2f
00:06:07.324    11:27:33  -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:07.324    11:27:33  -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:07.324    11:27:33  -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:06:07.324    11:27:33  -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:07.324    11:27:33  -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:06:07.324     11:27:33  -- scripts/common.sh@15 -- # shopt -s extglob
00:06:07.324     11:27:33  -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:07.324     11:27:33  -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:07.324     11:27:33  -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:07.324      11:27:33  -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:07.324      11:27:33  -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:07.324      11:27:33  -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:07.324      11:27:33  -- paths/export.sh@5 -- # export PATH
00:06:07.324      11:27:33  -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:07.324    11:27:33  -- nvmf/common.sh@51 -- # : 0
00:06:07.324    11:27:33  -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:06:07.324    11:27:33  -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:06:07.324    11:27:33  -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:07.324    11:27:33  -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:07.324    11:27:33  -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:07.324    11:27:33  -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:06:07.324  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:06:07.324    11:27:33  -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:06:07.584    11:27:33  -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:06:07.584    11:27:33  -- nvmf/common.sh@55 -- # have_pci_nics=0
00:06:07.584   11:27:33  -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:06:07.584    11:27:33  -- spdk/autotest.sh@32 -- # uname -s
00:06:07.584   11:27:33  -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:06:07.584   11:27:33  -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:06:07.584   11:27:33  -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps
00:06:07.584   11:27:33  -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t'
00:06:07.584   11:27:33  -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps
00:06:07.584   11:27:33  -- spdk/autotest.sh@44 -- # modprobe nbd
00:06:07.584    11:27:33  -- spdk/autotest.sh@46 -- # type -P udevadm
00:06:07.584   11:27:33  -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:06:07.584   11:27:33  -- spdk/autotest.sh@48 -- # udevadm_pid=66929
00:06:07.584   11:27:33  -- spdk/autotest.sh@53 -- # start_monitor_resources
00:06:07.584   11:27:33  -- pm/common@17 -- # local monitor
00:06:07.584   11:27:33  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:06:07.584   11:27:33  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:06:07.584   11:27:33  -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:06:07.584   11:27:33  -- pm/common@25 -- # sleep 1
00:06:07.584    11:27:33  -- pm/common@21 -- # date +%s
00:06:07.584    11:27:33  -- pm/common@21 -- # date +%s
00:06:07.584   11:27:33  -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734348453
00:06:07.584   11:27:33  -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734348453
00:06:07.584  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734348453_collect-vmstat.pm.log
00:06:07.584  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734348453_collect-cpu-load.pm.log
00:06:08.524   11:27:34  -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:06:08.524   11:27:34  -- spdk/autotest.sh@57 -- # timing_enter autotest
00:06:08.524   11:27:34  -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:08.524   11:27:34  -- common/autotest_common.sh@10 -- # set +x
00:06:08.524   11:27:34  -- spdk/autotest.sh@59 -- # create_test_list
00:06:08.524   11:27:34  -- common/autotest_common.sh@748 -- # xtrace_disable
00:06:08.524   11:27:34  -- common/autotest_common.sh@10 -- # set +x
00:06:08.524     11:27:34  -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh
00:06:08.524    11:27:34  -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk
00:06:08.524   11:27:34  -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk
00:06:08.524   11:27:34  -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output
00:06:08.524   11:27:34  -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk
00:06:08.524   11:27:34  -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:06:08.524    11:27:34  -- common/autotest_common.sh@1455 -- # uname
00:06:08.524   11:27:34  -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']'
00:06:08.524   11:27:34  -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:06:08.524    11:27:34  -- common/autotest_common.sh@1475 -- # uname
00:06:08.524   11:27:34  -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]]
00:06:08.524   11:27:34  -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:06:08.524   11:27:34  -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:06:08.784  lcov: LCOV version 1.15
00:06:08.784   11:27:34  -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info
00:06:23.728  /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:06:23.728  geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno
00:06:41.820   11:28:05  -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:06:41.820   11:28:05  -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:41.820   11:28:05  -- common/autotest_common.sh@10 -- # set +x
00:06:41.820   11:28:05  -- spdk/autotest.sh@78 -- # rm -f
00:06:41.820   11:28:05  -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:06:41.820  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:41.820  0000:00:11.0 (1b36 0010): Already using the nvme driver
00:06:41.820  0000:00:10.0 (1b36 0010): Already using the nvme driver
00:06:41.820   11:28:06  -- spdk/autotest.sh@83 -- # get_zoned_devs
00:06:41.820   11:28:06  -- common/autotest_common.sh@1655 -- # zoned_devs=()
00:06:41.820   11:28:06  -- common/autotest_common.sh@1655 -- # local -gA zoned_devs
00:06:41.820   11:28:06  -- common/autotest_common.sh@1656 -- # local nvme bdf
00:06:41.820   11:28:06  -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:06:41.820   11:28:06  -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1
00:06:41.820   11:28:06  -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:06:41.820   11:28:06  -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:06:41.820   11:28:06  -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:06:41.820   11:28:06  -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:06:41.820   11:28:06  -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1
00:06:41.820   11:28:06  -- common/autotest_common.sh@1648 -- # local device=nvme1n1
00:06:41.820   11:28:06  -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:06:41.820   11:28:06  -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:06:41.820   11:28:06  -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:06:41.820   11:28:06  -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2
00:06:41.820   11:28:06  -- common/autotest_common.sh@1648 -- # local device=nvme1n2
00:06:41.820   11:28:06  -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]]
00:06:41.820   11:28:06  -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:06:41.820   11:28:06  -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:06:41.820   11:28:06  -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3
00:06:41.820   11:28:06  -- common/autotest_common.sh@1648 -- # local device=nvme1n3
00:06:41.820   11:28:06  -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]]
00:06:41.820   11:28:06  -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:06:41.820   11:28:06  -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:06:41.820   11:28:06  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:06:41.820   11:28:06  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:06:41.820   11:28:06  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:06:41.820   11:28:06  -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:06:41.820   11:28:06  -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:06:41.820  No valid GPT data, bailing
00:06:41.820    11:28:06  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:06:41.820   11:28:06  -- scripts/common.sh@394 -- # pt=
00:06:41.820   11:28:06  -- scripts/common.sh@395 -- # return 1
00:06:41.820   11:28:06  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:06:41.820  1+0 records in
00:06:41.820  1+0 records out
00:06:41.820  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00612095 s, 171 MB/s
00:06:41.820   11:28:06  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:06:41.820   11:28:06  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:06:41.820   11:28:06  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1
00:06:41.820   11:28:06  -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt
00:06:41.820   11:28:06  -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1
00:06:41.820  No valid GPT data, bailing
00:06:41.820    11:28:06  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:06:41.820   11:28:06  -- scripts/common.sh@394 -- # pt=
00:06:41.820   11:28:06  -- scripts/common.sh@395 -- # return 1
00:06:41.820   11:28:06  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1
00:06:41.820  1+0 records in
00:06:41.820  1+0 records out
00:06:41.820  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00647891 s, 162 MB/s
00:06:41.820   11:28:06  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:06:41.820   11:28:06  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:06:41.820   11:28:06  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2
00:06:41.820   11:28:06  -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt
00:06:41.820   11:28:06  -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2
00:06:41.820  No valid GPT data, bailing
00:06:41.820    11:28:06  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2
00:06:41.820   11:28:06  -- scripts/common.sh@394 -- # pt=
00:06:41.820   11:28:06  -- scripts/common.sh@395 -- # return 1
00:06:41.820   11:28:06  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1
00:06:41.820  1+0 records in
00:06:41.820  1+0 records out
00:06:41.820  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00618417 s, 170 MB/s
00:06:41.820   11:28:06  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:06:41.820   11:28:06  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:06:41.820   11:28:06  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3
00:06:41.820   11:28:06  -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt
00:06:41.820   11:28:06  -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3
00:06:41.820  No valid GPT data, bailing
00:06:41.820    11:28:06  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3
00:06:41.820   11:28:06  -- scripts/common.sh@394 -- # pt=
00:06:41.820   11:28:06  -- scripts/common.sh@395 -- # return 1
00:06:41.820   11:28:06  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1
00:06:41.820  1+0 records in
00:06:41.820  1+0 records out
00:06:41.820  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00631741 s, 166 MB/s
00:06:41.820   11:28:06  -- spdk/autotest.sh@105 -- # sync
00:06:41.820   11:28:06  -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:06:41.820   11:28:06  -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:06:41.820    11:28:06  -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:06:42.761    11:28:08  -- spdk/autotest.sh@111 -- # uname -s
00:06:42.761   11:28:08  -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:06:42.761   11:28:08  -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:06:42.761   11:28:08  -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:06:43.328  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:43.328  Hugepages
00:06:43.328  node     hugesize     free /  total
00:06:43.328  node0   1048576kB        0 /      0
00:06:43.328  node0      2048kB        0 /      0
00:06:43.328  
00:06:43.328  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:06:43.328  virtio                    0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:06:43.328  NVMe                      0000:00:10.0    1b36   0010   unknown nvme             nvme0      nvme0n1
00:06:43.587  NVMe                      0000:00:11.0    1b36   0010   unknown nvme             nvme1      nvme1n1 nvme1n2 nvme1n3
00:06:43.587    11:28:09  -- spdk/autotest.sh@117 -- # uname -s
00:06:43.587   11:28:09  -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:06:43.587   11:28:09  -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:06:43.587   11:28:09  -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:44.522  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:44.522  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:06:44.522  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:06:44.522   11:28:10  -- common/autotest_common.sh@1515 -- # sleep 1
00:06:45.455   11:28:11  -- common/autotest_common.sh@1516 -- # bdfs=()
00:06:45.455   11:28:11  -- common/autotest_common.sh@1516 -- # local bdfs
00:06:45.455   11:28:11  -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs))
00:06:45.455    11:28:11  -- common/autotest_common.sh@1518 -- # get_nvme_bdfs
00:06:45.455    11:28:11  -- common/autotest_common.sh@1496 -- # bdfs=()
00:06:45.455    11:28:11  -- common/autotest_common.sh@1496 -- # local bdfs
00:06:45.455    11:28:11  -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:45.456     11:28:11  -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:06:45.456     11:28:11  -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:06:45.714    11:28:11  -- common/autotest_common.sh@1498 -- # (( 2 == 0 ))
00:06:45.714    11:28:11  -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:06:45.714   11:28:11  -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:06:45.973  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:46.233  Waiting for block devices as requested
00:06:46.233  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:06:46.233  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:06:46.491   11:28:12  -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}"
00:06:46.491    11:28:12  -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:06:46.491     11:28:12  -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1
00:06:46.491     11:28:12  -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme
00:06:46.491    11:28:12  -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:06:46.491    11:28:12  -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]]
00:06:46.491     11:28:12  -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:06:46.491    11:28:12  -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1
00:06:46.491   11:28:12  -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1
00:06:46.491   11:28:12  -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]]
00:06:46.491    11:28:12  -- common/autotest_common.sh@1529 -- # cut -d: -f2
00:06:46.491    11:28:12  -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1
00:06:46.491    11:28:12  -- common/autotest_common.sh@1529 -- # grep oacs
00:06:46.491   11:28:12  -- common/autotest_common.sh@1529 -- # oacs=' 0x12a'
00:06:46.491   11:28:12  -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8
00:06:46.491   11:28:12  -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]]
00:06:46.491    11:28:12  -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1
00:06:46.491    11:28:12  -- common/autotest_common.sh@1538 -- # grep unvmcap
00:06:46.491    11:28:12  -- common/autotest_common.sh@1538 -- # cut -d: -f2
00:06:46.491   11:28:12  -- common/autotest_common.sh@1538 -- # unvmcap=' 0'
00:06:46.491   11:28:12  -- common/autotest_common.sh@1539 -- # [[  0 -eq 0 ]]
00:06:46.491   11:28:12  -- common/autotest_common.sh@1541 -- # continue
00:06:46.491   11:28:12  -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}"
00:06:46.491    11:28:12  -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0
00:06:46.491     11:28:12  -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1
00:06:46.491     11:28:12  -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme
00:06:46.491    11:28:12  -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:06:46.491    11:28:12  -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]]
00:06:46.491     11:28:12  -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:06:46.491    11:28:12  -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0
00:06:46.491   11:28:12  -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0
00:06:46.491   11:28:12  -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]]
00:06:46.491    11:28:12  -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0
00:06:46.491    11:28:12  -- common/autotest_common.sh@1529 -- # grep oacs
00:06:46.491    11:28:12  -- common/autotest_common.sh@1529 -- # cut -d: -f2
00:06:46.491   11:28:12  -- common/autotest_common.sh@1529 -- # oacs=' 0x12a'
00:06:46.491   11:28:12  -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8
00:06:46.491   11:28:12  -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]]
00:06:46.491    11:28:12  -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0
00:06:46.491    11:28:12  -- common/autotest_common.sh@1538 -- # cut -d: -f2
00:06:46.492    11:28:12  -- common/autotest_common.sh@1538 -- # grep unvmcap
00:06:46.492   11:28:12  -- common/autotest_common.sh@1538 -- # unvmcap=' 0'
00:06:46.492   11:28:12  -- common/autotest_common.sh@1539 -- # [[  0 -eq 0 ]]
00:06:46.492   11:28:12  -- common/autotest_common.sh@1541 -- # continue
00:06:46.492   11:28:12  -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:06:46.492   11:28:12  -- common/autotest_common.sh@730 -- # xtrace_disable
00:06:46.492   11:28:12  -- common/autotest_common.sh@10 -- # set +x
00:06:46.492   11:28:12  -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:06:46.492   11:28:12  -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:46.492   11:28:12  -- common/autotest_common.sh@10 -- # set +x
00:06:46.492   11:28:12  -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:47.426  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:47.426  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:06:47.426  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:06:47.426   11:28:13  -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:06:47.426   11:28:13  -- common/autotest_common.sh@730 -- # xtrace_disable
00:06:47.426   11:28:13  -- common/autotest_common.sh@10 -- # set +x
00:06:47.684   11:28:13  -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:06:47.684   11:28:13  -- common/autotest_common.sh@1576 -- # mapfile -t bdfs
00:06:47.684    11:28:13  -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54
00:06:47.684    11:28:13  -- common/autotest_common.sh@1561 -- # bdfs=()
00:06:47.684    11:28:13  -- common/autotest_common.sh@1561 -- # _bdfs=()
00:06:47.684    11:28:13  -- common/autotest_common.sh@1561 -- # local bdfs _bdfs
00:06:47.684    11:28:13  -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs))
00:06:47.684     11:28:13  -- common/autotest_common.sh@1562 -- # get_nvme_bdfs
00:06:47.684     11:28:13  -- common/autotest_common.sh@1496 -- # bdfs=()
00:06:47.684     11:28:13  -- common/autotest_common.sh@1496 -- # local bdfs
00:06:47.684     11:28:13  -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:47.684      11:28:13  -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:06:47.684      11:28:13  -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:06:47.684     11:28:13  -- common/autotest_common.sh@1498 -- # (( 2 == 0 ))
00:06:47.684     11:28:13  -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:06:47.684    11:28:13  -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}"
00:06:47.684     11:28:13  -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device
00:06:47.684    11:28:13  -- common/autotest_common.sh@1564 -- # device=0x0010
00:06:47.684    11:28:13  -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:06:47.684    11:28:13  -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}"
00:06:47.684     11:28:13  -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device
00:06:47.684    11:28:13  -- common/autotest_common.sh@1564 -- # device=0x0010
00:06:47.684    11:28:13  -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:06:47.684    11:28:13  -- common/autotest_common.sh@1570 -- # (( 0 > 0 ))
00:06:47.684    11:28:13  -- common/autotest_common.sh@1570 -- # return 0
00:06:47.684   11:28:13  -- common/autotest_common.sh@1577 -- # [[ -z '' ]]
00:06:47.684   11:28:13  -- common/autotest_common.sh@1578 -- # return 0
00:06:47.684   11:28:13  -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:06:47.684   11:28:13  -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:06:47.684   11:28:13  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:06:47.684   11:28:13  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:06:47.684   11:28:13  -- spdk/autotest.sh@149 -- # timing_enter lib
00:06:47.684   11:28:13  -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:47.684   11:28:13  -- common/autotest_common.sh@10 -- # set +x
00:06:47.684   11:28:13  -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:06:47.684   11:28:13  -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:06:47.684   11:28:13  -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:47.684   11:28:13  -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:47.684   11:28:13  -- common/autotest_common.sh@10 -- # set +x
00:06:47.684  ************************************
00:06:47.684  START TEST env
00:06:47.684  ************************************
00:06:47.684   11:28:13 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:06:47.684  * Looking for test storage...
00:06:47.943  * Found test storage at /home/vagrant/spdk_repo/spdk/test/env
00:06:47.943    11:28:13 env -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:06:47.943     11:28:13 env -- common/autotest_common.sh@1681 -- # lcov --version
00:06:47.943     11:28:13 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:06:47.943    11:28:13 env -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:06:47.943    11:28:13 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:47.943    11:28:13 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:47.943    11:28:13 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:47.943    11:28:13 env -- scripts/common.sh@336 -- # IFS=.-:
00:06:47.943    11:28:13 env -- scripts/common.sh@336 -- # read -ra ver1
00:06:47.943    11:28:13 env -- scripts/common.sh@337 -- # IFS=.-:
00:06:47.943    11:28:13 env -- scripts/common.sh@337 -- # read -ra ver2
00:06:47.943    11:28:13 env -- scripts/common.sh@338 -- # local 'op=<'
00:06:47.943    11:28:13 env -- scripts/common.sh@340 -- # ver1_l=2
00:06:47.943    11:28:13 env -- scripts/common.sh@341 -- # ver2_l=1
00:06:47.943    11:28:13 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:47.943    11:28:13 env -- scripts/common.sh@344 -- # case "$op" in
00:06:47.943    11:28:13 env -- scripts/common.sh@345 -- # : 1
00:06:47.943    11:28:13 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:47.943    11:28:13 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:47.943     11:28:13 env -- scripts/common.sh@365 -- # decimal 1
00:06:47.943     11:28:13 env -- scripts/common.sh@353 -- # local d=1
00:06:47.943     11:28:13 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:47.943     11:28:13 env -- scripts/common.sh@355 -- # echo 1
00:06:47.943    11:28:13 env -- scripts/common.sh@365 -- # ver1[v]=1
00:06:47.943     11:28:13 env -- scripts/common.sh@366 -- # decimal 2
00:06:47.943     11:28:13 env -- scripts/common.sh@353 -- # local d=2
00:06:47.943     11:28:13 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:47.943     11:28:13 env -- scripts/common.sh@355 -- # echo 2
00:06:47.943    11:28:13 env -- scripts/common.sh@366 -- # ver2[v]=2
00:06:47.943    11:28:13 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:47.943    11:28:13 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:47.943    11:28:13 env -- scripts/common.sh@368 -- # return 0
00:06:47.943    11:28:13 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:47.943    11:28:13 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:06:47.943  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:47.943  		--rc genhtml_branch_coverage=1
00:06:47.943  		--rc genhtml_function_coverage=1
00:06:47.943  		--rc genhtml_legend=1
00:06:47.943  		--rc geninfo_all_blocks=1
00:06:47.943  		--rc geninfo_unexecuted_blocks=1
00:06:47.943  		
00:06:47.943  		'
00:06:47.943    11:28:13 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:06:47.943  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:47.943  		--rc genhtml_branch_coverage=1
00:06:47.943  		--rc genhtml_function_coverage=1
00:06:47.943  		--rc genhtml_legend=1
00:06:47.943  		--rc geninfo_all_blocks=1
00:06:47.943  		--rc geninfo_unexecuted_blocks=1
00:06:47.943  		
00:06:47.943  		'
00:06:47.943    11:28:13 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:06:47.943  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:47.943  		--rc genhtml_branch_coverage=1
00:06:47.943  		--rc genhtml_function_coverage=1
00:06:47.943  		--rc genhtml_legend=1
00:06:47.943  		--rc geninfo_all_blocks=1
00:06:47.943  		--rc geninfo_unexecuted_blocks=1
00:06:47.943  		
00:06:47.943  		'
00:06:47.943    11:28:13 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 
00:06:47.943  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:47.943  		--rc genhtml_branch_coverage=1
00:06:47.943  		--rc genhtml_function_coverage=1
00:06:47.943  		--rc genhtml_legend=1
00:06:47.943  		--rc geninfo_all_blocks=1
00:06:47.943  		--rc geninfo_unexecuted_blocks=1
00:06:47.943  		
00:06:47.943  		'
00:06:47.943   11:28:13 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:06:47.943   11:28:13 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:47.943   11:28:13 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:47.943   11:28:13 env -- common/autotest_common.sh@10 -- # set +x
00:06:47.943  ************************************
00:06:47.943  START TEST env_memory
00:06:47.943  ************************************
00:06:47.943   11:28:13 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:06:47.943  
00:06:47.943  
00:06:47.943       CUnit - A unit testing framework for C - Version 2.1-3
00:06:47.943       http://cunit.sourceforge.net/
00:06:47.943  
00:06:47.943  
00:06:47.943  Suite: memory
00:06:47.943    Test: alloc and free memory map ...[2024-12-16 11:28:13.963861] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:06:48.202  passed
00:06:48.202    Test: mem map translation ...[2024-12-16 11:28:14.019256] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:06:48.202  [2024-12-16 11:28:14.019348] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:06:48.203  [2024-12-16 11:28:14.019428] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:06:48.203  [2024-12-16 11:28:14.019454] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:06:48.203  passed
00:06:48.203    Test: mem map registration ...[2024-12-16 11:28:14.103672] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:06:48.203  [2024-12-16 11:28:14.103733] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:06:48.203  passed
00:06:48.203    Test: mem map adjacent registrations ...passed
00:06:48.203  
00:06:48.203  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:48.203                suites      1      1    n/a      0        0
00:06:48.203                 tests      4      4      4      0        0
00:06:48.203               asserts    152    152    152      0      n/a
00:06:48.203  
00:06:48.203  Elapsed time =    0.283 seconds
00:06:48.203  
00:06:48.203  real	0m0.343s
00:06:48.203  user	0m0.296s
00:06:48.203  sys	0m0.035s
00:06:48.203   11:28:14 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:48.203   11:28:14 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:06:48.203  ************************************
00:06:48.203  END TEST env_memory
00:06:48.203  ************************************
00:06:48.461   11:28:14 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:06:48.461   11:28:14 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:48.461   11:28:14 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:48.461   11:28:14 env -- common/autotest_common.sh@10 -- # set +x
00:06:48.461  ************************************
00:06:48.461  START TEST env_vtophys
00:06:48.461  ************************************
00:06:48.461   11:28:14 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:06:48.461  EAL: lib.eal log level changed from notice to debug
00:06:48.461  EAL: Detected lcore 0 as core 0 on socket 0
00:06:48.461  EAL: Detected lcore 1 as core 0 on socket 0
00:06:48.461  EAL: Detected lcore 2 as core 0 on socket 0
00:06:48.461  EAL: Detected lcore 3 as core 0 on socket 0
00:06:48.461  EAL: Detected lcore 4 as core 0 on socket 0
00:06:48.461  EAL: Detected lcore 5 as core 0 on socket 0
00:06:48.461  EAL: Detected lcore 6 as core 0 on socket 0
00:06:48.461  EAL: Detected lcore 7 as core 0 on socket 0
00:06:48.461  EAL: Detected lcore 8 as core 0 on socket 0
00:06:48.461  EAL: Detected lcore 9 as core 0 on socket 0
00:06:48.461  EAL: Maximum logical cores by configuration: 128
00:06:48.461  EAL: Detected CPU lcores: 10
00:06:48.461  EAL: Detected NUMA nodes: 1
00:06:48.461  EAL: Checking presence of .so 'librte_eal.so.24.0'
00:06:48.461  EAL: Detected shared linkage of DPDK
00:06:48.461  EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0
00:06:48.461  EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0
00:06:48.461  EAL: Registered [vdev] bus.
00:06:48.461  EAL: bus.vdev log level changed from disabled to notice
00:06:48.461  EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0
00:06:48.461  EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0
00:06:48.461  EAL: pmd.net.i40e.init log level changed from disabled to notice
00:06:48.461  EAL: pmd.net.i40e.driver log level changed from disabled to notice
00:06:48.461  EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so
00:06:48.461  EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so
00:06:48.461  EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so
00:06:48.461  EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so
00:06:48.461  EAL: No shared files mode enabled, IPC will be disabled
00:06:48.461  EAL: No shared files mode enabled, IPC is disabled
00:06:48.461  EAL: Selected IOVA mode 'PA'
00:06:48.461  EAL: Probing VFIO support...
00:06:48.461  EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:06:48.461  EAL: VFIO modules not loaded, skipping VFIO support...
00:06:48.461  EAL: Ask a virtual area of 0x2e000 bytes
00:06:48.461  EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:06:48.461  EAL: Setting up physically contiguous memory...
00:06:48.461  EAL: Setting maximum number of open files to 524288
00:06:48.461  EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:06:48.461  EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:06:48.461  EAL: Ask a virtual area of 0x61000 bytes
00:06:48.461  EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:06:48.461  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:48.461  EAL: Ask a virtual area of 0x400000000 bytes
00:06:48.461  EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:06:48.461  EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:06:48.461  EAL: Ask a virtual area of 0x61000 bytes
00:06:48.461  EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:06:48.461  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:48.461  EAL: Ask a virtual area of 0x400000000 bytes
00:06:48.462  EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:06:48.462  EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:06:48.462  EAL: Ask a virtual area of 0x61000 bytes
00:06:48.462  EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:06:48.462  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:48.462  EAL: Ask a virtual area of 0x400000000 bytes
00:06:48.462  EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:06:48.462  EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:06:48.462  EAL: Ask a virtual area of 0x61000 bytes
00:06:48.462  EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:06:48.462  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:48.462  EAL: Ask a virtual area of 0x400000000 bytes
00:06:48.462  EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:06:48.462  EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:06:48.462  EAL: Hugepages will be freed exactly as allocated.
00:06:48.462  EAL: No shared files mode enabled, IPC is disabled
00:06:48.462  EAL: No shared files mode enabled, IPC is disabled
00:06:48.462  EAL: TSC frequency is ~2290000 KHz
00:06:48.462  EAL: Main lcore 0 is ready (tid=7f43d5610a40;cpuset=[0])
00:06:48.462  EAL: Trying to obtain current memory policy.
00:06:48.462  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:48.462  EAL: Restoring previous memory policy: 0
00:06:48.462  EAL: request: mp_malloc_sync
00:06:48.462  EAL: No shared files mode enabled, IPC is disabled
00:06:48.462  EAL: Heap on socket 0 was expanded by 2MB
00:06:48.462  EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:06:48.462  EAL: No shared files mode enabled, IPC is disabled
00:06:48.462  EAL: No PCI address specified using 'addr=<id>' in: bus=pci
00:06:48.462  EAL: Mem event callback 'spdk:(nil)' registered
00:06:48.462  EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory)
00:06:48.462  
00:06:48.462  
00:06:48.462       CUnit - A unit testing framework for C - Version 2.1-3
00:06:48.462       http://cunit.sourceforge.net/
00:06:48.462  
00:06:48.462  
00:06:48.462  Suite: components_suite
00:06:49.043    Test: vtophys_malloc_test ...passed
00:06:49.043    Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:06:49.043  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:49.043  EAL: Restoring previous memory policy: 4
00:06:49.043  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.043  EAL: request: mp_malloc_sync
00:06:49.043  EAL: No shared files mode enabled, IPC is disabled
00:06:49.043  EAL: Heap on socket 0 was expanded by 4MB
00:06:49.043  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.043  EAL: request: mp_malloc_sync
00:06:49.043  EAL: No shared files mode enabled, IPC is disabled
00:06:49.043  EAL: Heap on socket 0 was shrunk by 4MB
00:06:49.043  EAL: Trying to obtain current memory policy.
00:06:49.043  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:49.043  EAL: Restoring previous memory policy: 4
00:06:49.043  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.043  EAL: request: mp_malloc_sync
00:06:49.043  EAL: No shared files mode enabled, IPC is disabled
00:06:49.043  EAL: Heap on socket 0 was expanded by 6MB
00:06:49.043  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.043  EAL: request: mp_malloc_sync
00:06:49.043  EAL: No shared files mode enabled, IPC is disabled
00:06:49.043  EAL: Heap on socket 0 was shrunk by 6MB
00:06:49.043  EAL: Trying to obtain current memory policy.
00:06:49.043  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:49.043  EAL: Restoring previous memory policy: 4
00:06:49.043  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.043  EAL: request: mp_malloc_sync
00:06:49.043  EAL: No shared files mode enabled, IPC is disabled
00:06:49.043  EAL: Heap on socket 0 was expanded by 10MB
00:06:49.043  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.043  EAL: request: mp_malloc_sync
00:06:49.043  EAL: No shared files mode enabled, IPC is disabled
00:06:49.043  EAL: Heap on socket 0 was shrunk by 10MB
00:06:49.043  EAL: Trying to obtain current memory policy.
00:06:49.043  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:49.043  EAL: Restoring previous memory policy: 4
00:06:49.043  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.043  EAL: request: mp_malloc_sync
00:06:49.043  EAL: No shared files mode enabled, IPC is disabled
00:06:49.043  EAL: Heap on socket 0 was expanded by 18MB
00:06:49.043  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.043  EAL: request: mp_malloc_sync
00:06:49.043  EAL: No shared files mode enabled, IPC is disabled
00:06:49.043  EAL: Heap on socket 0 was shrunk by 18MB
00:06:49.043  EAL: Trying to obtain current memory policy.
00:06:49.043  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:49.043  EAL: Restoring previous memory policy: 4
00:06:49.043  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.043  EAL: request: mp_malloc_sync
00:06:49.043  EAL: No shared files mode enabled, IPC is disabled
00:06:49.043  EAL: Heap on socket 0 was expanded by 34MB
00:06:49.043  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.043  EAL: request: mp_malloc_sync
00:06:49.043  EAL: No shared files mode enabled, IPC is disabled
00:06:49.043  EAL: Heap on socket 0 was shrunk by 34MB
00:06:49.043  EAL: Trying to obtain current memory policy.
00:06:49.043  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:49.043  EAL: Restoring previous memory policy: 4
00:06:49.043  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.043  EAL: request: mp_malloc_sync
00:06:49.043  EAL: No shared files mode enabled, IPC is disabled
00:06:49.043  EAL: Heap on socket 0 was expanded by 66MB
00:06:49.043  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.043  EAL: request: mp_malloc_sync
00:06:49.043  EAL: No shared files mode enabled, IPC is disabled
00:06:49.043  EAL: Heap on socket 0 was shrunk by 66MB
00:06:49.043  EAL: Trying to obtain current memory policy.
00:06:49.043  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:49.043  EAL: Restoring previous memory policy: 4
00:06:49.043  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.043  EAL: request: mp_malloc_sync
00:06:49.043  EAL: No shared files mode enabled, IPC is disabled
00:06:49.043  EAL: Heap on socket 0 was expanded by 130MB
00:06:49.043  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.043  EAL: request: mp_malloc_sync
00:06:49.043  EAL: No shared files mode enabled, IPC is disabled
00:06:49.043  EAL: Heap on socket 0 was shrunk by 130MB
00:06:49.043  EAL: Trying to obtain current memory policy.
00:06:49.043  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:49.043  EAL: Restoring previous memory policy: 4
00:06:49.043  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.043  EAL: request: mp_malloc_sync
00:06:49.043  EAL: No shared files mode enabled, IPC is disabled
00:06:49.043  EAL: Heap on socket 0 was expanded by 258MB
00:06:49.326  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.327  EAL: request: mp_malloc_sync
00:06:49.327  EAL: No shared files mode enabled, IPC is disabled
00:06:49.327  EAL: Heap on socket 0 was shrunk by 258MB
00:06:49.327  EAL: Trying to obtain current memory policy.
00:06:49.327  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:49.327  EAL: Restoring previous memory policy: 4
00:06:49.327  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.327  EAL: request: mp_malloc_sync
00:06:49.327  EAL: No shared files mode enabled, IPC is disabled
00:06:49.327  EAL: Heap on socket 0 was expanded by 514MB
00:06:49.327  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.586  EAL: request: mp_malloc_sync
00:06:49.586  EAL: No shared files mode enabled, IPC is disabled
00:06:49.586  EAL: Heap on socket 0 was shrunk by 514MB
00:06:49.586  EAL: Trying to obtain current memory policy.
00:06:49.586  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:49.844  EAL: Restoring previous memory policy: 4
00:06:49.844  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.844  EAL: request: mp_malloc_sync
00:06:49.844  EAL: No shared files mode enabled, IPC is disabled
00:06:49.844  EAL: Heap on socket 0 was expanded by 1026MB
00:06:49.845  EAL: Calling mem event callback 'spdk:(nil)'
00:06:50.103  passed
00:06:50.103  
00:06:50.103  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:50.103                suites      1      1    n/a      0        0
00:06:50.103                 tests      2      2      2      0        0
00:06:50.103               asserts   5274   5274   5274      0      n/a
00:06:50.103  
00:06:50.103  Elapsed time =    1.503 seconds
00:06:50.103  EAL: request: mp_malloc_sync
00:06:50.103  EAL: No shared files mode enabled, IPC is disabled
00:06:50.103  EAL: Heap on socket 0 was shrunk by 1026MB
00:06:50.103  
00:06:50.103  EAL: Calling mem event callback 'spdk:(nil)'
00:06:50.103  EAL: request: mp_malloc_sync
00:06:50.103  EAL: No shared files mode enabled, IPC is disabled
00:06:50.103  EAL: Heap on socket 0 was shrunk by 2MB
00:06:50.103  EAL: No shared files mode enabled, IPC is disabled
00:06:50.103  EAL: No shared files mode enabled, IPC is disabled
00:06:50.103  EAL: No shared files mode enabled, IPC is disabled
00:06:50.103  
00:06:50.103  real	0m1.771s
00:06:50.103  user	0m0.795s
00:06:50.103  sys	0m0.840s
00:06:50.103   11:28:16 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:50.103   11:28:16 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:06:50.103  ************************************
00:06:50.103  END TEST env_vtophys
00:06:50.103  ************************************
00:06:50.103   11:28:16 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:06:50.103   11:28:16 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:50.103   11:28:16 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:50.103   11:28:16 env -- common/autotest_common.sh@10 -- # set +x
00:06:50.103  ************************************
00:06:50.103  START TEST env_pci
00:06:50.103  ************************************
00:06:50.103   11:28:16 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:06:50.103  
00:06:50.103  
00:06:50.103       CUnit - A unit testing framework for C - Version 2.1-3
00:06:50.103       http://cunit.sourceforge.net/
00:06:50.103  
00:06:50.103  
00:06:50.103  Suite: pci
00:06:50.103    Test: pci_hook ...[2024-12-16 11:28:16.145842] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69188 has claimed it
00:06:50.361  EAL: Cannot find device (10000:00:01.0)
00:06:50.361  EAL: Failed to attach device on primary process
00:06:50.361  passed
00:06:50.361  
00:06:50.361  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:50.361                suites      1      1    n/a      0        0
00:06:50.361                 tests      1      1      1      0        0
00:06:50.361               asserts     25     25     25      0      n/a
00:06:50.361  
00:06:50.361  Elapsed time =    0.007 seconds
00:06:50.361  
00:06:50.361  real	0m0.096s
00:06:50.361  user	0m0.043s
00:06:50.361  sys	0m0.051s
00:06:50.361   11:28:16 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:50.361   11:28:16 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:06:50.361  ************************************
00:06:50.361  END TEST env_pci
00:06:50.361  ************************************
00:06:50.361   11:28:16 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:06:50.361    11:28:16 env -- env/env.sh@15 -- # uname
00:06:50.361   11:28:16 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:06:50.361   11:28:16 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:06:50.361   11:28:16 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:06:50.361   11:28:16 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:06:50.361   11:28:16 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:50.361   11:28:16 env -- common/autotest_common.sh@10 -- # set +x
00:06:50.361  ************************************
00:06:50.361  START TEST env_dpdk_post_init
00:06:50.361  ************************************
00:06:50.361   11:28:16 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:06:50.361  EAL: Detected CPU lcores: 10
00:06:50.361  EAL: Detected NUMA nodes: 1
00:06:50.361  EAL: Detected shared linkage of DPDK
00:06:50.361  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:06:50.361  EAL: Selected IOVA mode 'PA'
00:06:50.619  TELEMETRY: No legacy callbacks, legacy socket not created
00:06:50.619  EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:06:50.619  EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:06:50.619  Starting DPDK initialization...
00:06:50.619  Starting SPDK post initialization...
00:06:50.619  SPDK NVMe probe
00:06:50.619  Attaching to 0000:00:10.0
00:06:50.619  Attaching to 0000:00:11.0
00:06:50.619  Attached to 0000:00:10.0
00:06:50.619  Attached to 0000:00:11.0
00:06:50.619  Cleaning up...
00:06:50.619  
00:06:50.619  real	0m0.260s
00:06:50.619  user	0m0.078s
00:06:50.619  sys	0m0.082s
00:06:50.619   11:28:16 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:50.619   11:28:16 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:06:50.619  ************************************
00:06:50.619  END TEST env_dpdk_post_init
00:06:50.619  ************************************
00:06:50.619    11:28:16 env -- env/env.sh@26 -- # uname
00:06:50.619   11:28:16 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:06:50.619   11:28:16 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:06:50.619   11:28:16 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:50.619   11:28:16 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:50.619   11:28:16 env -- common/autotest_common.sh@10 -- # set +x
00:06:50.619  ************************************
00:06:50.619  START TEST env_mem_callbacks
00:06:50.619  ************************************
00:06:50.619   11:28:16 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:06:50.619  EAL: Detected CPU lcores: 10
00:06:50.619  EAL: Detected NUMA nodes: 1
00:06:50.619  EAL: Detected shared linkage of DPDK
00:06:50.619  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:06:50.619  EAL: Selected IOVA mode 'PA'
00:06:50.877  TELEMETRY: No legacy callbacks, legacy socket not created
00:06:50.877  
00:06:50.877  
00:06:50.877       CUnit - A unit testing framework for C - Version 2.1-3
00:06:50.877       http://cunit.sourceforge.net/
00:06:50.877  
00:06:50.877  
00:06:50.877  Suite: memory
00:06:50.877    Test: test ...
00:06:50.877  register 0x200000200000 2097152
00:06:50.877  malloc 3145728
00:06:50.877  register 0x200000400000 4194304
00:06:50.877  buf 0x200000500000 len 3145728 PASSED
00:06:50.877  malloc 64
00:06:50.877  buf 0x2000004fff40 len 64 PASSED
00:06:50.877  malloc 4194304
00:06:50.877  register 0x200000800000 6291456
00:06:50.877  buf 0x200000a00000 len 4194304 PASSED
00:06:50.877  free 0x200000500000 3145728
00:06:50.877  free 0x2000004fff40 64
00:06:50.877  unregister 0x200000400000 4194304 PASSED
00:06:50.877  free 0x200000a00000 4194304
00:06:50.877  unregister 0x200000800000 6291456 PASSED
00:06:50.877  malloc 8388608
00:06:50.877  register 0x200000400000 10485760
00:06:50.877  buf 0x200000600000 len 8388608 PASSED
00:06:50.877  free 0x200000600000 8388608
00:06:50.877  unregister 0x200000400000 10485760 PASSED
00:06:50.877  passed
00:06:50.877  
00:06:50.877  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:50.877                suites      1      1    n/a      0        0
00:06:50.877                 tests      1      1      1      0        0
00:06:50.877               asserts     15     15     15      0      n/a
00:06:50.877  
00:06:50.877  Elapsed time =    0.010 seconds
00:06:50.877  
00:06:50.877  real	0m0.202s
00:06:50.877  user	0m0.031s
00:06:50.877  sys	0m0.069s
00:06:50.877   11:28:16 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:50.877   11:28:16 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:06:50.877  ************************************
00:06:50.877  END TEST env_mem_callbacks
00:06:50.877  ************************************
00:06:50.877  
00:06:50.877  real	0m3.251s
00:06:50.877  user	0m1.474s
00:06:50.877  sys	0m1.446s
00:06:50.877   11:28:16 env -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:50.877   11:28:16 env -- common/autotest_common.sh@10 -- # set +x
00:06:50.877  ************************************
00:06:50.877  END TEST env
00:06:50.877  ************************************
00:06:50.877   11:28:16  -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:06:50.877   11:28:16  -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:50.877   11:28:16  -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:50.877   11:28:16  -- common/autotest_common.sh@10 -- # set +x
00:06:50.877  ************************************
00:06:50.877  START TEST rpc
00:06:50.878  ************************************
00:06:50.878   11:28:16 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:06:51.136  * Looking for test storage...
00:06:51.136  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:06:51.136    11:28:17 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:06:51.136     11:28:17 rpc -- common/autotest_common.sh@1681 -- # lcov --version
00:06:51.136     11:28:17 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:06:51.136    11:28:17 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:06:51.137    11:28:17 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:51.137    11:28:17 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:51.137    11:28:17 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:51.137    11:28:17 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:06:51.137    11:28:17 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:06:51.137    11:28:17 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:06:51.137    11:28:17 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:06:51.137    11:28:17 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:06:51.137    11:28:17 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:06:51.137    11:28:17 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:06:51.137    11:28:17 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:51.137    11:28:17 rpc -- scripts/common.sh@344 -- # case "$op" in
00:06:51.137    11:28:17 rpc -- scripts/common.sh@345 -- # : 1
00:06:51.137    11:28:17 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:51.137    11:28:17 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:51.137     11:28:17 rpc -- scripts/common.sh@365 -- # decimal 1
00:06:51.137     11:28:17 rpc -- scripts/common.sh@353 -- # local d=1
00:06:51.137     11:28:17 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:51.137     11:28:17 rpc -- scripts/common.sh@355 -- # echo 1
00:06:51.137    11:28:17 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:06:51.137     11:28:17 rpc -- scripts/common.sh@366 -- # decimal 2
00:06:51.137     11:28:17 rpc -- scripts/common.sh@353 -- # local d=2
00:06:51.137     11:28:17 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:51.137     11:28:17 rpc -- scripts/common.sh@355 -- # echo 2
00:06:51.137    11:28:17 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:06:51.137    11:28:17 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:51.137    11:28:17 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:51.137    11:28:17 rpc -- scripts/common.sh@368 -- # return 0
00:06:51.137    11:28:17 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:51.137    11:28:17 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:06:51.137  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:51.137  		--rc genhtml_branch_coverage=1
00:06:51.137  		--rc genhtml_function_coverage=1
00:06:51.137  		--rc genhtml_legend=1
00:06:51.137  		--rc geninfo_all_blocks=1
00:06:51.137  		--rc geninfo_unexecuted_blocks=1
00:06:51.137  		
00:06:51.137  		'
00:06:51.137    11:28:17 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:06:51.137  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:51.137  		--rc genhtml_branch_coverage=1
00:06:51.137  		--rc genhtml_function_coverage=1
00:06:51.137  		--rc genhtml_legend=1
00:06:51.137  		--rc geninfo_all_blocks=1
00:06:51.137  		--rc geninfo_unexecuted_blocks=1
00:06:51.137  		
00:06:51.137  		'
00:06:51.137    11:28:17 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:06:51.137  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:51.137  		--rc genhtml_branch_coverage=1
00:06:51.137  		--rc genhtml_function_coverage=1
00:06:51.137  		--rc genhtml_legend=1
00:06:51.137  		--rc geninfo_all_blocks=1
00:06:51.137  		--rc geninfo_unexecuted_blocks=1
00:06:51.137  		
00:06:51.137  		'
00:06:51.137    11:28:17 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 
00:06:51.137  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:51.137  		--rc genhtml_branch_coverage=1
00:06:51.137  		--rc genhtml_function_coverage=1
00:06:51.137  		--rc genhtml_legend=1
00:06:51.137  		--rc geninfo_all_blocks=1
00:06:51.137  		--rc geninfo_unexecuted_blocks=1
00:06:51.137  		
00:06:51.137  		'
00:06:51.137   11:28:17 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69315
00:06:51.137   11:28:17 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:06:51.137   11:28:17 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:51.137   11:28:17 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69315
00:06:51.137   11:28:17 rpc -- common/autotest_common.sh@831 -- # '[' -z 69315 ']'
00:06:51.137   11:28:17 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:51.137   11:28:17 rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:51.137   11:28:17 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:51.137  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:51.137   11:28:17 rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:51.137   11:28:17 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:51.396  [2024-12-16 11:28:17.265279] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:06:51.396  [2024-12-16 11:28:17.265420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69315 ]
00:06:51.396  [2024-12-16 11:28:17.429722] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:51.654  [2024-12-16 11:28:17.484490] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:06:51.654  [2024-12-16 11:28:17.484563] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69315' to capture a snapshot of events at runtime.
00:06:51.654  [2024-12-16 11:28:17.484576] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:51.654  [2024-12-16 11:28:17.484595] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:51.654  [2024-12-16 11:28:17.484607] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69315 for offline analysis/debug.
00:06:51.654  [2024-12-16 11:28:17.484648] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
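The rpc suite starts spdk_tgt with the bdev tracepoint group enabled (-e bdev) and waits for the RPC socket before issuing commands. Following the hint the application prints above, a sketch of capturing that trace outside the test harness; the binary paths assume a default in-tree build, and framework_wait_init is used here only as a way to block until startup finishes:

  ./build/bin/spdk_tgt -e bdev &
  ./scripts/rpc.py framework_wait_init                        # returns once the target completes initialization
  ./build/bin/spdk_trace -s spdk_tgt -p "$(pidof spdk_tgt)"   # snapshot events, as suggested in the notice above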
00:06:52.219   11:28:18 rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:52.219   11:28:18 rpc -- common/autotest_common.sh@864 -- # return 0
00:06:52.219   11:28:18 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:06:52.219   11:28:18 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:06:52.219   11:28:18 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:06:52.219   11:28:18 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:06:52.219   11:28:18 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:52.219   11:28:18 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:52.219   11:28:18 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:52.219  ************************************
00:06:52.219  START TEST rpc_integrity
00:06:52.219  ************************************
00:06:52.219   11:28:18 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity
00:06:52.219    11:28:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:06:52.219    11:28:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.219    11:28:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:52.219    11:28:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.219   11:28:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:06:52.219    11:28:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:06:52.219   11:28:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:06:52.219    11:28:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:06:52.219    11:28:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.219    11:28:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:52.219    11:28:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.219   11:28:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:06:52.219    11:28:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:06:52.219    11:28:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.219    11:28:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:52.219    11:28:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.219   11:28:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:06:52.219  {
00:06:52.219  "name": "Malloc0",
00:06:52.219  "aliases": [
00:06:52.219  "425c70a6-62f1-4f2d-bfc7-7d3043ee8f3b"
00:06:52.219  ],
00:06:52.219  "product_name": "Malloc disk",
00:06:52.219  "block_size": 512,
00:06:52.219  "num_blocks": 16384,
00:06:52.219  "uuid": "425c70a6-62f1-4f2d-bfc7-7d3043ee8f3b",
00:06:52.219  "assigned_rate_limits": {
00:06:52.219  "rw_ios_per_sec": 0,
00:06:52.219  "rw_mbytes_per_sec": 0,
00:06:52.219  "r_mbytes_per_sec": 0,
00:06:52.219  "w_mbytes_per_sec": 0
00:06:52.219  },
00:06:52.219  "claimed": false,
00:06:52.219  "zoned": false,
00:06:52.219  "supported_io_types": {
00:06:52.219  "read": true,
00:06:52.219  "write": true,
00:06:52.219  "unmap": true,
00:06:52.219  "flush": true,
00:06:52.219  "reset": true,
00:06:52.219  "nvme_admin": false,
00:06:52.219  "nvme_io": false,
00:06:52.219  "nvme_io_md": false,
00:06:52.219  "write_zeroes": true,
00:06:52.219  "zcopy": true,
00:06:52.219  "get_zone_info": false,
00:06:52.219  "zone_management": false,
00:06:52.219  "zone_append": false,
00:06:52.219  "compare": false,
00:06:52.219  "compare_and_write": false,
00:06:52.219  "abort": true,
00:06:52.219  "seek_hole": false,
00:06:52.219  "seek_data": false,
00:06:52.219  "copy": true,
00:06:52.219  "nvme_iov_md": false
00:06:52.219  },
00:06:52.219  "memory_domains": [
00:06:52.219  {
00:06:52.219  "dma_device_id": "system",
00:06:52.219  "dma_device_type": 1
00:06:52.219  },
00:06:52.219  {
00:06:52.219  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:52.219  "dma_device_type": 2
00:06:52.219  }
00:06:52.219  ],
00:06:52.219  "driver_specific": {}
00:06:52.219  }
00:06:52.219  ]'
00:06:52.219    11:28:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:06:52.219   11:28:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:06:52.219   11:28:18 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:06:52.219   11:28:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.219   11:28:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:52.219  [2024-12-16 11:28:18.252177] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:06:52.219  [2024-12-16 11:28:18.252269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:52.219  [2024-12-16 11:28:18.252309] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:06:52.219  [2024-12-16 11:28:18.252320] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:52.219  [2024-12-16 11:28:18.255212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:52.219  [2024-12-16 11:28:18.255258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:06:52.219  Passthru0
00:06:52.219   11:28:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.219    11:28:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:06:52.219    11:28:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.219    11:28:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:52.478    11:28:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.478   11:28:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:06:52.478  {
00:06:52.478  "name": "Malloc0",
00:06:52.478  "aliases": [
00:06:52.478  "425c70a6-62f1-4f2d-bfc7-7d3043ee8f3b"
00:06:52.478  ],
00:06:52.478  "product_name": "Malloc disk",
00:06:52.478  "block_size": 512,
00:06:52.478  "num_blocks": 16384,
00:06:52.478  "uuid": "425c70a6-62f1-4f2d-bfc7-7d3043ee8f3b",
00:06:52.478  "assigned_rate_limits": {
00:06:52.478  "rw_ios_per_sec": 0,
00:06:52.478  "rw_mbytes_per_sec": 0,
00:06:52.478  "r_mbytes_per_sec": 0,
00:06:52.478  "w_mbytes_per_sec": 0
00:06:52.478  },
00:06:52.478  "claimed": true,
00:06:52.478  "claim_type": "exclusive_write",
00:06:52.478  "zoned": false,
00:06:52.478  "supported_io_types": {
00:06:52.478  "read": true,
00:06:52.478  "write": true,
00:06:52.478  "unmap": true,
00:06:52.478  "flush": true,
00:06:52.478  "reset": true,
00:06:52.478  "nvme_admin": false,
00:06:52.478  "nvme_io": false,
00:06:52.478  "nvme_io_md": false,
00:06:52.478  "write_zeroes": true,
00:06:52.478  "zcopy": true,
00:06:52.478  "get_zone_info": false,
00:06:52.478  "zone_management": false,
00:06:52.478  "zone_append": false,
00:06:52.478  "compare": false,
00:06:52.478  "compare_and_write": false,
00:06:52.478  "abort": true,
00:06:52.478  "seek_hole": false,
00:06:52.478  "seek_data": false,
00:06:52.478  "copy": true,
00:06:52.478  "nvme_iov_md": false
00:06:52.478  },
00:06:52.478  "memory_domains": [
00:06:52.478  {
00:06:52.478  "dma_device_id": "system",
00:06:52.478  "dma_device_type": 1
00:06:52.478  },
00:06:52.478  {
00:06:52.478  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:52.478  "dma_device_type": 2
00:06:52.478  }
00:06:52.478  ],
00:06:52.478  "driver_specific": {}
00:06:52.478  },
00:06:52.478  {
00:06:52.478  "name": "Passthru0",
00:06:52.478  "aliases": [
00:06:52.478  "4ae8a52f-dd04-55d9-897a-8932e77d2422"
00:06:52.478  ],
00:06:52.478  "product_name": "passthru",
00:06:52.478  "block_size": 512,
00:06:52.478  "num_blocks": 16384,
00:06:52.478  "uuid": "4ae8a52f-dd04-55d9-897a-8932e77d2422",
00:06:52.478  "assigned_rate_limits": {
00:06:52.478  "rw_ios_per_sec": 0,
00:06:52.478  "rw_mbytes_per_sec": 0,
00:06:52.478  "r_mbytes_per_sec": 0,
00:06:52.478  "w_mbytes_per_sec": 0
00:06:52.478  },
00:06:52.478  "claimed": false,
00:06:52.478  "zoned": false,
00:06:52.478  "supported_io_types": {
00:06:52.478  "read": true,
00:06:52.478  "write": true,
00:06:52.478  "unmap": true,
00:06:52.478  "flush": true,
00:06:52.478  "reset": true,
00:06:52.478  "nvme_admin": false,
00:06:52.478  "nvme_io": false,
00:06:52.478  "nvme_io_md": false,
00:06:52.478  "write_zeroes": true,
00:06:52.478  "zcopy": true,
00:06:52.478  "get_zone_info": false,
00:06:52.478  "zone_management": false,
00:06:52.478  "zone_append": false,
00:06:52.478  "compare": false,
00:06:52.478  "compare_and_write": false,
00:06:52.478  "abort": true,
00:06:52.478  "seek_hole": false,
00:06:52.478  "seek_data": false,
00:06:52.478  "copy": true,
00:06:52.478  "nvme_iov_md": false
00:06:52.478  },
00:06:52.478  "memory_domains": [
00:06:52.478  {
00:06:52.478  "dma_device_id": "system",
00:06:52.478  "dma_device_type": 1
00:06:52.478  },
00:06:52.478  {
00:06:52.478  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:52.478  "dma_device_type": 2
00:06:52.478  }
00:06:52.478  ],
00:06:52.478  "driver_specific": {
00:06:52.478  "passthru": {
00:06:52.478  "name": "Passthru0",
00:06:52.478  "base_bdev_name": "Malloc0"
00:06:52.478  }
00:06:52.478  }
00:06:52.478  }
00:06:52.478  ]'
00:06:52.478    11:28:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:06:52.478   11:28:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:06:52.478   11:28:18 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:06:52.478   11:28:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.478   11:28:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:52.478   11:28:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.478   11:28:18 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:06:52.478   11:28:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.478   11:28:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:52.478   11:28:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.478    11:28:18 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:06:52.478    11:28:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.478    11:28:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:52.478    11:28:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.478   11:28:18 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:06:52.478    11:28:18 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:06:52.478   11:28:18 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:06:52.478  
00:06:52.478  real	0m0.317s
00:06:52.478  user	0m0.192s
00:06:52.478  sys	0m0.061s
00:06:52.478   11:28:18 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:52.478   11:28:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:52.478  ************************************
00:06:52.478  END TEST rpc_integrity
00:06:52.478  ************************************
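rpc_integrity above exercises the malloc and passthru bdev RPCs through the rpc_cmd wrapper. The same sequence issued directly with scripts/rpc.py against the default /var/tmp/spdk.sock, as a sketch:

  ./scripts/rpc.py bdev_malloc_create 8 512                      # 8 MB malloc bdev with 512-byte blocks (Malloc0)
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0  # claim Malloc0 and expose it as Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length                    # 2 while both bdevs exist
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0
  ./scripts/rpc.py bdev_get_bdevs | jq length                    # back to 0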
00:06:52.478   11:28:18 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:06:52.478   11:28:18 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:52.478   11:28:18 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:52.478   11:28:18 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:52.478  ************************************
00:06:52.478  START TEST rpc_plugins
00:06:52.478  ************************************
00:06:52.478   11:28:18 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins
00:06:52.478    11:28:18 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:06:52.478    11:28:18 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.478    11:28:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:52.478    11:28:18 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.478   11:28:18 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:06:52.478    11:28:18 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:06:52.478    11:28:18 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.478    11:28:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:52.478    11:28:18 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.478   11:28:18 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:06:52.478  {
00:06:52.478  "name": "Malloc1",
00:06:52.478  "aliases": [
00:06:52.478  "2b451e94-8025-434c-ad64-906d260b9d40"
00:06:52.478  ],
00:06:52.478  "product_name": "Malloc disk",
00:06:52.478  "block_size": 4096,
00:06:52.478  "num_blocks": 256,
00:06:52.478  "uuid": "2b451e94-8025-434c-ad64-906d260b9d40",
00:06:52.478  "assigned_rate_limits": {
00:06:52.478  "rw_ios_per_sec": 0,
00:06:52.478  "rw_mbytes_per_sec": 0,
00:06:52.479  "r_mbytes_per_sec": 0,
00:06:52.479  "w_mbytes_per_sec": 0
00:06:52.479  },
00:06:52.479  "claimed": false,
00:06:52.479  "zoned": false,
00:06:52.479  "supported_io_types": {
00:06:52.479  "read": true,
00:06:52.479  "write": true,
00:06:52.479  "unmap": true,
00:06:52.479  "flush": true,
00:06:52.479  "reset": true,
00:06:52.479  "nvme_admin": false,
00:06:52.479  "nvme_io": false,
00:06:52.479  "nvme_io_md": false,
00:06:52.479  "write_zeroes": true,
00:06:52.479  "zcopy": true,
00:06:52.479  "get_zone_info": false,
00:06:52.479  "zone_management": false,
00:06:52.479  "zone_append": false,
00:06:52.479  "compare": false,
00:06:52.479  "compare_and_write": false,
00:06:52.479  "abort": true,
00:06:52.479  "seek_hole": false,
00:06:52.479  "seek_data": false,
00:06:52.479  "copy": true,
00:06:52.479  "nvme_iov_md": false
00:06:52.479  },
00:06:52.479  "memory_domains": [
00:06:52.479  {
00:06:52.479  "dma_device_id": "system",
00:06:52.479  "dma_device_type": 1
00:06:52.479  },
00:06:52.479  {
00:06:52.479  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:52.479  "dma_device_type": 2
00:06:52.479  }
00:06:52.479  ],
00:06:52.479  "driver_specific": {}
00:06:52.479  }
00:06:52.479  ]'
00:06:52.479    11:28:18 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:06:52.738   11:28:18 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:06:52.738   11:28:18 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:06:52.738   11:28:18 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.738   11:28:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:52.738   11:28:18 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.738    11:28:18 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:06:52.738    11:28:18 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.738    11:28:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:52.738    11:28:18 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.738   11:28:18 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:06:52.738    11:28:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:06:52.738  ************************************
00:06:52.738  END TEST rpc_plugins
00:06:52.738  ************************************
00:06:52.738   11:28:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:06:52.738  
00:06:52.738  real	0m0.154s
00:06:52.738  user	0m0.089s
00:06:52.738  sys	0m0.023s
00:06:52.738   11:28:18 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:52.738   11:28:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
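rpc_plugins loads extra RPC methods from a Python module found on PYTHONPATH (exported earlier in this run to include test/rpc_plugins). A sketch of issuing the same calls directly; the plugin module name and its create_malloc/delete_malloc methods are exactly those used above:

  export PYTHONPATH=$PYTHONPATH:/home/vagrant/spdk_repo/spdk/test/rpc_plugins
  ./scripts/rpc.py --plugin rpc_plugin create_malloc            # prints the new bdev name (Malloc1 in this run)
  ./scripts/rpc.py --plugin rpc_plugin delete_malloc Malloc1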
00:06:52.738   11:28:18 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:06:52.738   11:28:18 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:52.738   11:28:18 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:52.738   11:28:18 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:52.738  ************************************
00:06:52.738  START TEST rpc_trace_cmd_test
00:06:52.738  ************************************
00:06:52.738   11:28:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test
00:06:52.738   11:28:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:06:52.738    11:28:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:06:52.738    11:28:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.738    11:28:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:06:52.738    11:28:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.738   11:28:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:06:52.738  "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69315",
00:06:52.738  "tpoint_group_mask": "0x8",
00:06:52.738  "iscsi_conn": {
00:06:52.738  "mask": "0x2",
00:06:52.738  "tpoint_mask": "0x0"
00:06:52.738  },
00:06:52.738  "scsi": {
00:06:52.738  "mask": "0x4",
00:06:52.738  "tpoint_mask": "0x0"
00:06:52.738  },
00:06:52.738  "bdev": {
00:06:52.738  "mask": "0x8",
00:06:52.738  "tpoint_mask": "0xffffffffffffffff"
00:06:52.738  },
00:06:52.738  "nvmf_rdma": {
00:06:52.738  "mask": "0x10",
00:06:52.738  "tpoint_mask": "0x0"
00:06:52.738  },
00:06:52.738  "nvmf_tcp": {
00:06:52.738  "mask": "0x20",
00:06:52.738  "tpoint_mask": "0x0"
00:06:52.738  },
00:06:52.738  "ftl": {
00:06:52.738  "mask": "0x40",
00:06:52.738  "tpoint_mask": "0x0"
00:06:52.738  },
00:06:52.738  "blobfs": {
00:06:52.738  "mask": "0x80",
00:06:52.738  "tpoint_mask": "0x0"
00:06:52.738  },
00:06:52.738  "dsa": {
00:06:52.738  "mask": "0x200",
00:06:52.738  "tpoint_mask": "0x0"
00:06:52.738  },
00:06:52.738  "thread": {
00:06:52.738  "mask": "0x400",
00:06:52.738  "tpoint_mask": "0x0"
00:06:52.738  },
00:06:52.738  "nvme_pcie": {
00:06:52.738  "mask": "0x800",
00:06:52.738  "tpoint_mask": "0x0"
00:06:52.738  },
00:06:52.738  "iaa": {
00:06:52.738  "mask": "0x1000",
00:06:52.738  "tpoint_mask": "0x0"
00:06:52.738  },
00:06:52.738  "nvme_tcp": {
00:06:52.738  "mask": "0x2000",
00:06:52.738  "tpoint_mask": "0x0"
00:06:52.738  },
00:06:52.738  "bdev_nvme": {
00:06:52.738  "mask": "0x4000",
00:06:52.738  "tpoint_mask": "0x0"
00:06:52.738  },
00:06:52.738  "sock": {
00:06:52.738  "mask": "0x8000",
00:06:52.738  "tpoint_mask": "0x0"
00:06:52.738  },
00:06:52.738  "blob": {
00:06:52.738  "mask": "0x10000",
00:06:52.738  "tpoint_mask": "0x0"
00:06:52.738  },
00:06:52.738  "bdev_raid": {
00:06:52.738  "mask": "0x20000",
00:06:52.738  "tpoint_mask": "0x0"
00:06:52.738  }
00:06:52.738  }'
00:06:52.738    11:28:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:06:52.738   11:28:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']'
00:06:52.738    11:28:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:06:52.997   11:28:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:06:52.997    11:28:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:06:52.998   11:28:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:06:52.998    11:28:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:06:52.998   11:28:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:06:52.998    11:28:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:06:52.998  ************************************
00:06:52.998  END TEST rpc_trace_cmd_test
00:06:52.998  ************************************
00:06:52.998   11:28:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:06:52.998  
00:06:52.998  real	0m0.237s
00:06:52.998  user	0m0.197s
00:06:52.998  sys	0m0.026s
00:06:52.998   11:28:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:52.998   11:28:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:06:52.998   11:28:18 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:06:52.998   11:28:18 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:06:52.998   11:28:18 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:06:52.998   11:28:18 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:52.998   11:28:18 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:52.998   11:28:18 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:52.998  ************************************
00:06:52.998  START TEST rpc_daemon_integrity
00:06:52.998  ************************************
00:06:52.998   11:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity
00:06:52.998    11:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:06:52.998    11:28:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.998    11:28:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:52.998    11:28:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:52.998   11:28:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:06:52.998    11:28:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:06:53.257   11:28:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:06:53.257    11:28:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:06:53.257    11:28:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:53.257    11:28:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:53.257    11:28:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:53.257   11:28:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:06:53.257    11:28:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:06:53.257    11:28:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:53.257    11:28:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:53.257    11:28:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:53.257   11:28:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:06:53.257  {
00:06:53.257  "name": "Malloc2",
00:06:53.257  "aliases": [
00:06:53.257  "c9eedd28-6ba1-49b8-aa0e-0186c173d068"
00:06:53.257  ],
00:06:53.257  "product_name": "Malloc disk",
00:06:53.257  "block_size": 512,
00:06:53.258  "num_blocks": 16384,
00:06:53.258  "uuid": "c9eedd28-6ba1-49b8-aa0e-0186c173d068",
00:06:53.258  "assigned_rate_limits": {
00:06:53.258  "rw_ios_per_sec": 0,
00:06:53.258  "rw_mbytes_per_sec": 0,
00:06:53.258  "r_mbytes_per_sec": 0,
00:06:53.258  "w_mbytes_per_sec": 0
00:06:53.258  },
00:06:53.258  "claimed": false,
00:06:53.258  "zoned": false,
00:06:53.258  "supported_io_types": {
00:06:53.258  "read": true,
00:06:53.258  "write": true,
00:06:53.258  "unmap": true,
00:06:53.258  "flush": true,
00:06:53.258  "reset": true,
00:06:53.258  "nvme_admin": false,
00:06:53.258  "nvme_io": false,
00:06:53.258  "nvme_io_md": false,
00:06:53.258  "write_zeroes": true,
00:06:53.258  "zcopy": true,
00:06:53.258  "get_zone_info": false,
00:06:53.258  "zone_management": false,
00:06:53.258  "zone_append": false,
00:06:53.258  "compare": false,
00:06:53.258  "compare_and_write": false,
00:06:53.258  "abort": true,
00:06:53.258  "seek_hole": false,
00:06:53.258  "seek_data": false,
00:06:53.258  "copy": true,
00:06:53.258  "nvme_iov_md": false
00:06:53.258  },
00:06:53.258  "memory_domains": [
00:06:53.258  {
00:06:53.258  "dma_device_id": "system",
00:06:53.258  "dma_device_type": 1
00:06:53.258  },
00:06:53.258  {
00:06:53.258  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:53.258  "dma_device_type": 2
00:06:53.258  }
00:06:53.258  ],
00:06:53.258  "driver_specific": {}
00:06:53.258  }
00:06:53.258  ]'
00:06:53.258    11:28:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:06:53.258   11:28:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:06:53.258   11:28:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:06:53.258   11:28:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:53.258   11:28:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:53.258  [2024-12-16 11:28:19.155985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:06:53.258  [2024-12-16 11:28:19.156070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:53.258  [2024-12-16 11:28:19.156116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:06:53.258  [2024-12-16 11:28:19.156127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:53.258  [2024-12-16 11:28:19.158938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:53.258  [2024-12-16 11:28:19.159055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:06:53.258  Passthru0
00:06:53.258   11:28:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:53.258    11:28:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:06:53.258    11:28:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:53.258    11:28:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:53.258    11:28:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:53.258   11:28:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:06:53.258  {
00:06:53.258  "name": "Malloc2",
00:06:53.258  "aliases": [
00:06:53.258  "c9eedd28-6ba1-49b8-aa0e-0186c173d068"
00:06:53.258  ],
00:06:53.258  "product_name": "Malloc disk",
00:06:53.258  "block_size": 512,
00:06:53.258  "num_blocks": 16384,
00:06:53.258  "uuid": "c9eedd28-6ba1-49b8-aa0e-0186c173d068",
00:06:53.258  "assigned_rate_limits": {
00:06:53.258  "rw_ios_per_sec": 0,
00:06:53.258  "rw_mbytes_per_sec": 0,
00:06:53.258  "r_mbytes_per_sec": 0,
00:06:53.258  "w_mbytes_per_sec": 0
00:06:53.258  },
00:06:53.258  "claimed": true,
00:06:53.258  "claim_type": "exclusive_write",
00:06:53.258  "zoned": false,
00:06:53.258  "supported_io_types": {
00:06:53.258  "read": true,
00:06:53.258  "write": true,
00:06:53.258  "unmap": true,
00:06:53.258  "flush": true,
00:06:53.258  "reset": true,
00:06:53.258  "nvme_admin": false,
00:06:53.258  "nvme_io": false,
00:06:53.258  "nvme_io_md": false,
00:06:53.258  "write_zeroes": true,
00:06:53.258  "zcopy": true,
00:06:53.258  "get_zone_info": false,
00:06:53.258  "zone_management": false,
00:06:53.258  "zone_append": false,
00:06:53.258  "compare": false,
00:06:53.258  "compare_and_write": false,
00:06:53.258  "abort": true,
00:06:53.258  "seek_hole": false,
00:06:53.258  "seek_data": false,
00:06:53.258  "copy": true,
00:06:53.258  "nvme_iov_md": false
00:06:53.258  },
00:06:53.258  "memory_domains": [
00:06:53.258  {
00:06:53.258  "dma_device_id": "system",
00:06:53.258  "dma_device_type": 1
00:06:53.258  },
00:06:53.258  {
00:06:53.258  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:53.258  "dma_device_type": 2
00:06:53.258  }
00:06:53.258  ],
00:06:53.258  "driver_specific": {}
00:06:53.258  },
00:06:53.258  {
00:06:53.258  "name": "Passthru0",
00:06:53.258  "aliases": [
00:06:53.258  "7165e7ac-7ce2-5182-8ed4-97d63f51a1c2"
00:06:53.258  ],
00:06:53.258  "product_name": "passthru",
00:06:53.258  "block_size": 512,
00:06:53.258  "num_blocks": 16384,
00:06:53.258  "uuid": "7165e7ac-7ce2-5182-8ed4-97d63f51a1c2",
00:06:53.258  "assigned_rate_limits": {
00:06:53.258  "rw_ios_per_sec": 0,
00:06:53.258  "rw_mbytes_per_sec": 0,
00:06:53.258  "r_mbytes_per_sec": 0,
00:06:53.258  "w_mbytes_per_sec": 0
00:06:53.258  },
00:06:53.258  "claimed": false,
00:06:53.258  "zoned": false,
00:06:53.258  "supported_io_types": {
00:06:53.258  "read": true,
00:06:53.258  "write": true,
00:06:53.258  "unmap": true,
00:06:53.258  "flush": true,
00:06:53.258  "reset": true,
00:06:53.258  "nvme_admin": false,
00:06:53.258  "nvme_io": false,
00:06:53.258  "nvme_io_md": false,
00:06:53.258  "write_zeroes": true,
00:06:53.258  "zcopy": true,
00:06:53.258  "get_zone_info": false,
00:06:53.258  "zone_management": false,
00:06:53.258  "zone_append": false,
00:06:53.258  "compare": false,
00:06:53.258  "compare_and_write": false,
00:06:53.258  "abort": true,
00:06:53.258  "seek_hole": false,
00:06:53.258  "seek_data": false,
00:06:53.258  "copy": true,
00:06:53.258  "nvme_iov_md": false
00:06:53.258  },
00:06:53.258  "memory_domains": [
00:06:53.258  {
00:06:53.258  "dma_device_id": "system",
00:06:53.258  "dma_device_type": 1
00:06:53.258  },
00:06:53.258  {
00:06:53.258  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:53.258  "dma_device_type": 2
00:06:53.258  }
00:06:53.258  ],
00:06:53.258  "driver_specific": {
00:06:53.258  "passthru": {
00:06:53.258  "name": "Passthru0",
00:06:53.258  "base_bdev_name": "Malloc2"
00:06:53.258  }
00:06:53.258  }
00:06:53.258  }
00:06:53.258  ]'
00:06:53.258    11:28:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:06:53.258   11:28:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:06:53.258   11:28:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:06:53.258   11:28:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:53.258   11:28:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:53.258   11:28:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:53.258   11:28:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:06:53.258   11:28:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:53.258   11:28:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:53.258   11:28:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:53.258    11:28:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:06:53.258    11:28:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:53.258    11:28:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:53.258    11:28:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:53.258   11:28:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:06:53.258    11:28:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:06:53.258   11:28:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:06:53.258  
00:06:53.258  real	0m0.324s
00:06:53.258  user	0m0.200s
00:06:53.258  sys	0m0.046s
00:06:53.258   11:28:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:53.258   11:28:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:53.258  ************************************
00:06:53.258  END TEST rpc_daemon_integrity
00:06:53.617  ************************************
00:06:53.617   11:28:19 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:06:53.617   11:28:19 rpc -- rpc/rpc.sh@84 -- # killprocess 69315
00:06:53.617   11:28:19 rpc -- common/autotest_common.sh@950 -- # '[' -z 69315 ']'
00:06:53.617   11:28:19 rpc -- common/autotest_common.sh@954 -- # kill -0 69315
00:06:53.617    11:28:19 rpc -- common/autotest_common.sh@955 -- # uname
00:06:53.617   11:28:19 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:53.617    11:28:19 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69315
00:06:53.617  killing process with pid 69315
00:06:53.617   11:28:19 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:53.617   11:28:19 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:53.617   11:28:19 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69315'
00:06:53.617   11:28:19 rpc -- common/autotest_common.sh@969 -- # kill 69315
00:06:53.617   11:28:19 rpc -- common/autotest_common.sh@974 -- # wait 69315
00:06:53.905  
00:06:53.905  real	0m2.894s
00:06:53.905  user	0m3.465s
00:06:53.905  sys	0m0.873s
00:06:53.905   11:28:19 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:53.905   11:28:19 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:53.905  ************************************
00:06:53.905  END TEST rpc
00:06:53.905  ************************************
00:06:53.905   11:28:19  -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:06:53.905   11:28:19  -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:53.905   11:28:19  -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:53.905   11:28:19  -- common/autotest_common.sh@10 -- # set +x
00:06:53.905  ************************************
00:06:53.905  START TEST skip_rpc
00:06:53.905  ************************************
00:06:53.905   11:28:19 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:06:54.164  * Looking for test storage...
00:06:54.164  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:06:54.164    11:28:20 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:06:54.164     11:28:20 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version
00:06:54.164     11:28:20 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:06:54.164    11:28:20 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:06:54.164    11:28:20 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:54.164    11:28:20 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:54.164    11:28:20 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:54.164    11:28:20 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:06:54.164    11:28:20 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:06:54.164    11:28:20 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:06:54.164    11:28:20 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:06:54.164    11:28:20 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:06:54.164    11:28:20 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:06:54.164    11:28:20 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:06:54.164    11:28:20 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:54.164    11:28:20 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:06:54.164    11:28:20 skip_rpc -- scripts/common.sh@345 -- # : 1
00:06:54.164    11:28:20 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:54.164    11:28:20 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:54.164     11:28:20 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:06:54.164     11:28:20 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:06:54.164     11:28:20 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:54.164     11:28:20 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:06:54.164    11:28:20 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:06:54.164     11:28:20 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:06:54.164     11:28:20 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:06:54.164     11:28:20 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:54.164     11:28:20 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:06:54.164    11:28:20 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:06:54.164    11:28:20 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:54.164    11:28:20 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:54.164    11:28:20 skip_rpc -- scripts/common.sh@368 -- # return 0
00:06:54.164    11:28:20 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:54.164    11:28:20 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:06:54.164  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:54.164  		--rc genhtml_branch_coverage=1
00:06:54.164  		--rc genhtml_function_coverage=1
00:06:54.164  		--rc genhtml_legend=1
00:06:54.164  		--rc geninfo_all_blocks=1
00:06:54.164  		--rc geninfo_unexecuted_blocks=1
00:06:54.164  		
00:06:54.164  		'
00:06:54.164    11:28:20 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:06:54.164  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:54.164  		--rc genhtml_branch_coverage=1
00:06:54.164  		--rc genhtml_function_coverage=1
00:06:54.164  		--rc genhtml_legend=1
00:06:54.164  		--rc geninfo_all_blocks=1
00:06:54.164  		--rc geninfo_unexecuted_blocks=1
00:06:54.164  		
00:06:54.164  		'
00:06:54.164    11:28:20 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:06:54.164  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:54.164  		--rc genhtml_branch_coverage=1
00:06:54.164  		--rc genhtml_function_coverage=1
00:06:54.164  		--rc genhtml_legend=1
00:06:54.164  		--rc geninfo_all_blocks=1
00:06:54.164  		--rc geninfo_unexecuted_blocks=1
00:06:54.164  		
00:06:54.164  		'
00:06:54.164    11:28:20 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 
00:06:54.164  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:54.164  		--rc genhtml_branch_coverage=1
00:06:54.164  		--rc genhtml_function_coverage=1
00:06:54.164  		--rc genhtml_legend=1
00:06:54.164  		--rc geninfo_all_blocks=1
00:06:54.164  		--rc geninfo_unexecuted_blocks=1
00:06:54.164  		
00:06:54.164  		'
00:06:54.164   11:28:20 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:06:54.164   11:28:20 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:06:54.164   11:28:20 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:06:54.164   11:28:20 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:54.164   11:28:20 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:54.164   11:28:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:54.164  ************************************
00:06:54.164  START TEST skip_rpc
00:06:54.164  ************************************
00:06:54.164   11:28:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc
00:06:54.164   11:28:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69522
00:06:54.164   11:28:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:06:54.164   11:28:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:54.164   11:28:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:06:54.423  [2024-12-16 11:28:20.239708] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:06:54.423  [2024-12-16 11:28:20.240377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69522 ]
00:06:54.423  [2024-12-16 11:28:20.407343] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:54.423  [2024-12-16 11:28:20.455232] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:59.687   11:28:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:06:59.687   11:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0
00:06:59.687   11:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version
00:06:59.687   11:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:06:59.687   11:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:59.687    11:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:06:59.687   11:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:59.687   11:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version
00:06:59.687   11:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:59.687   11:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:59.687   11:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:06:59.687   11:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1
00:06:59.687   11:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:59.687   11:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:59.687   11:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:59.687   11:28:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:06:59.688   11:28:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69522
00:06:59.688   11:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69522 ']'
00:06:59.688   11:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69522
00:06:59.688    11:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname
00:06:59.688   11:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:59.688    11:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69522
00:06:59.688   11:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:59.688   11:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:59.688   11:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69522'
00:06:59.688  killing process with pid 69522
00:06:59.688   11:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69522
00:06:59.688   11:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69522
00:06:59.688  
00:06:59.688  real	0m5.463s
00:06:59.688  user	0m5.023s
00:06:59.688  sys	0m0.357s
00:06:59.688   11:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:59.688   11:28:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:59.688  ************************************
00:06:59.688  END TEST skip_rpc
00:06:59.688  ************************************
00:06:59.688   11:28:25 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:06:59.688   11:28:25 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:59.688   11:28:25 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:59.688   11:28:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:59.688  ************************************
00:06:59.688  START TEST skip_rpc_with_json
00:06:59.688  ************************************
00:06:59.688   11:28:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json
00:06:59.688   11:28:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:06:59.688   11:28:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69609
00:06:59.688   11:28:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:59.688   11:28:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:59.688   11:28:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69609
00:06:59.688  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:59.688   11:28:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 69609 ']'
00:06:59.688   11:28:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:59.688   11:28:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:59.688   11:28:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:59.688   11:28:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:59.688   11:28:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:59.993  [2024-12-16 11:28:25.764382] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:06:59.993  [2024-12-16 11:28:25.764518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69609 ]
00:06:59.993  [2024-12-16 11:28:25.913509] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:59.993  [2024-12-16 11:28:25.968462] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:00.927   11:28:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:00.927   11:28:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0
00:07:00.927   11:28:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:07:00.927   11:28:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:00.927   11:28:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:07:00.927  [2024-12-16 11:28:26.634354] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:07:00.927  request:
00:07:00.927  {
00:07:00.927  "trtype": "tcp",
00:07:00.927  "method": "nvmf_get_transports",
00:07:00.927  "req_id": 1
00:07:00.927  }
00:07:00.927  Got JSON-RPC error response
00:07:00.927  response:
00:07:00.927  {
00:07:00.927  "code": -19,
00:07:00.927  "message": "No such device"
00:07:00.927  }
00:07:00.927   11:28:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:07:00.927   11:28:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:07:00.927   11:28:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:00.927   11:28:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:07:00.927  [2024-12-16 11:28:26.646484] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:00.927   11:28:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:00.927   11:28:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:07:00.927   11:28:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:00.927   11:28:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:07:00.927   11:28:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:00.927   11:28:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:07:00.927  {
00:07:00.927  "subsystems": [
00:07:00.927  {
00:07:00.927  "subsystem": "fsdev",
00:07:00.927  "config": [
00:07:00.927  {
00:07:00.927  "method": "fsdev_set_opts",
00:07:00.927  "params": {
00:07:00.927  "fsdev_io_pool_size": 65535,
00:07:00.927  "fsdev_io_cache_size": 256
00:07:00.927  }
00:07:00.927  }
00:07:00.927  ]
00:07:00.927  },
00:07:00.927  {
00:07:00.927  "subsystem": "keyring",
00:07:00.927  "config": []
00:07:00.927  },
00:07:00.927  {
00:07:00.927  "subsystem": "iobuf",
00:07:00.927  "config": [
00:07:00.927  {
00:07:00.927  "method": "iobuf_set_options",
00:07:00.927  "params": {
00:07:00.927  "small_pool_count": 8192,
00:07:00.927  "large_pool_count": 1024,
00:07:00.927  "small_bufsize": 8192,
00:07:00.927  "large_bufsize": 135168
00:07:00.927  }
00:07:00.927  }
00:07:00.927  ]
00:07:00.927  },
00:07:00.927  {
00:07:00.927  "subsystem": "sock",
00:07:00.927  "config": [
00:07:00.927  {
00:07:00.927  "method": "sock_set_default_impl",
00:07:00.927  "params": {
00:07:00.927  "impl_name": "posix"
00:07:00.927  }
00:07:00.927  },
00:07:00.927  {
00:07:00.927  "method": "sock_impl_set_options",
00:07:00.927  "params": {
00:07:00.927  "impl_name": "ssl",
00:07:00.927  "recv_buf_size": 4096,
00:07:00.927  "send_buf_size": 4096,
00:07:00.927  "enable_recv_pipe": true,
00:07:00.927  "enable_quickack": false,
00:07:00.927  "enable_placement_id": 0,
00:07:00.927  "enable_zerocopy_send_server": true,
00:07:00.927  "enable_zerocopy_send_client": false,
00:07:00.927  "zerocopy_threshold": 0,
00:07:00.927  "tls_version": 0,
00:07:00.927  "enable_ktls": false
00:07:00.927  }
00:07:00.927  },
00:07:00.927  {
00:07:00.927  "method": "sock_impl_set_options",
00:07:00.927  "params": {
00:07:00.927  "impl_name": "posix",
00:07:00.927  "recv_buf_size": 2097152,
00:07:00.927  "send_buf_size": 2097152,
00:07:00.927  "enable_recv_pipe": true,
00:07:00.927  "enable_quickack": false,
00:07:00.927  "enable_placement_id": 0,
00:07:00.927  "enable_zerocopy_send_server": true,
00:07:00.927  "enable_zerocopy_send_client": false,
00:07:00.927  "zerocopy_threshold": 0,
00:07:00.927  "tls_version": 0,
00:07:00.927  "enable_ktls": false
00:07:00.927  }
00:07:00.927  }
00:07:00.927  ]
00:07:00.927  },
00:07:00.927  {
00:07:00.927  "subsystem": "vmd",
00:07:00.927  "config": []
00:07:00.927  },
00:07:00.927  {
00:07:00.927  "subsystem": "accel",
00:07:00.927  "config": [
00:07:00.927  {
00:07:00.927  "method": "accel_set_options",
00:07:00.927  "params": {
00:07:00.927  "small_cache_size": 128,
00:07:00.927  "large_cache_size": 16,
00:07:00.927  "task_count": 2048,
00:07:00.927  "sequence_count": 2048,
00:07:00.927  "buf_count": 2048
00:07:00.927  }
00:07:00.927  }
00:07:00.927  ]
00:07:00.927  },
00:07:00.927  {
00:07:00.927  "subsystem": "bdev",
00:07:00.927  "config": [
00:07:00.927  {
00:07:00.927  "method": "bdev_set_options",
00:07:00.927  "params": {
00:07:00.927  "bdev_io_pool_size": 65535,
00:07:00.927  "bdev_io_cache_size": 256,
00:07:00.927  "bdev_auto_examine": true,
00:07:00.927  "iobuf_small_cache_size": 128,
00:07:00.927  "iobuf_large_cache_size": 16
00:07:00.927  }
00:07:00.927  },
00:07:00.927  {
00:07:00.927  "method": "bdev_raid_set_options",
00:07:00.927  "params": {
00:07:00.927  "process_window_size_kb": 1024,
00:07:00.927  "process_max_bandwidth_mb_sec": 0
00:07:00.927  }
00:07:00.927  },
00:07:00.927  {
00:07:00.927  "method": "bdev_iscsi_set_options",
00:07:00.927  "params": {
00:07:00.927  "timeout_sec": 30
00:07:00.927  }
00:07:00.927  },
00:07:00.927  {
00:07:00.927  "method": "bdev_nvme_set_options",
00:07:00.927  "params": {
00:07:00.927  "action_on_timeout": "none",
00:07:00.927  "timeout_us": 0,
00:07:00.927  "timeout_admin_us": 0,
00:07:00.927  "keep_alive_timeout_ms": 10000,
00:07:00.927  "arbitration_burst": 0,
00:07:00.927  "low_priority_weight": 0,
00:07:00.928  "medium_priority_weight": 0,
00:07:00.928  "high_priority_weight": 0,
00:07:00.928  "nvme_adminq_poll_period_us": 10000,
00:07:00.928  "nvme_ioq_poll_period_us": 0,
00:07:00.928  "io_queue_requests": 0,
00:07:00.928  "delay_cmd_submit": true,
00:07:00.928  "transport_retry_count": 4,
00:07:00.928  "bdev_retry_count": 3,
00:07:00.928  "transport_ack_timeout": 0,
00:07:00.928  "ctrlr_loss_timeout_sec": 0,
00:07:00.928  "reconnect_delay_sec": 0,
00:07:00.928  "fast_io_fail_timeout_sec": 0,
00:07:00.928  "disable_auto_failback": false,
00:07:00.928  "generate_uuids": false,
00:07:00.928  "transport_tos": 0,
00:07:00.928  "nvme_error_stat": false,
00:07:00.928  "rdma_srq_size": 0,
00:07:00.928  "io_path_stat": false,
00:07:00.928  "allow_accel_sequence": false,
00:07:00.928  "rdma_max_cq_size": 0,
00:07:00.928  "rdma_cm_event_timeout_ms": 0,
00:07:00.928  "dhchap_digests": [
00:07:00.928  "sha256",
00:07:00.928  "sha384",
00:07:00.928  "sha512"
00:07:00.928  ],
00:07:00.928  "dhchap_dhgroups": [
00:07:00.928  "null",
00:07:00.928  "ffdhe2048",
00:07:00.928  "ffdhe3072",
00:07:00.928  "ffdhe4096",
00:07:00.928  "ffdhe6144",
00:07:00.928  "ffdhe8192"
00:07:00.928  ]
00:07:00.928  }
00:07:00.928  },
00:07:00.928  {
00:07:00.928  "method": "bdev_nvme_set_hotplug",
00:07:00.928  "params": {
00:07:00.928  "period_us": 100000,
00:07:00.928  "enable": false
00:07:00.928  }
00:07:00.928  },
00:07:00.928  {
00:07:00.928  "method": "bdev_wait_for_examine"
00:07:00.928  }
00:07:00.928  ]
00:07:00.928  },
00:07:00.928  {
00:07:00.928  "subsystem": "scsi",
00:07:00.928  "config": null
00:07:00.928  },
00:07:00.928  {
00:07:00.928  "subsystem": "scheduler",
00:07:00.928  "config": [
00:07:00.928  {
00:07:00.928  "method": "framework_set_scheduler",
00:07:00.928  "params": {
00:07:00.928  "name": "static"
00:07:00.928  }
00:07:00.928  }
00:07:00.928  ]
00:07:00.928  },
00:07:00.928  {
00:07:00.928  "subsystem": "vhost_scsi",
00:07:00.928  "config": []
00:07:00.928  },
00:07:00.928  {
00:07:00.928  "subsystem": "vhost_blk",
00:07:00.928  "config": []
00:07:00.928  },
00:07:00.928  {
00:07:00.928  "subsystem": "ublk",
00:07:00.928  "config": []
00:07:00.928  },
00:07:00.928  {
00:07:00.928  "subsystem": "nbd",
00:07:00.928  "config": []
00:07:00.928  },
00:07:00.928  {
00:07:00.928  "subsystem": "nvmf",
00:07:00.928  "config": [
00:07:00.928  {
00:07:00.928  "method": "nvmf_set_config",
00:07:00.928  "params": {
00:07:00.928  "discovery_filter": "match_any",
00:07:00.928  "admin_cmd_passthru": {
00:07:00.928  "identify_ctrlr": false
00:07:00.928  },
00:07:00.928  "dhchap_digests": [
00:07:00.928  "sha256",
00:07:00.928  "sha384",
00:07:00.928  "sha512"
00:07:00.928  ],
00:07:00.928  "dhchap_dhgroups": [
00:07:00.928  "null",
00:07:00.928  "ffdhe2048",
00:07:00.928  "ffdhe3072",
00:07:00.928  "ffdhe4096",
00:07:00.928  "ffdhe6144",
00:07:00.928  "ffdhe8192"
00:07:00.928  ]
00:07:00.928  }
00:07:00.928  },
00:07:00.928  {
00:07:00.928  "method": "nvmf_set_max_subsystems",
00:07:00.928  "params": {
00:07:00.928  "max_subsystems": 1024
00:07:00.928  }
00:07:00.928  },
00:07:00.928  {
00:07:00.928  "method": "nvmf_set_crdt",
00:07:00.928  "params": {
00:07:00.928  "crdt1": 0,
00:07:00.928  "crdt2": 0,
00:07:00.928  "crdt3": 0
00:07:00.928  }
00:07:00.928  },
00:07:00.928  {
00:07:00.928  "method": "nvmf_create_transport",
00:07:00.928  "params": {
00:07:00.928  "trtype": "TCP",
00:07:00.928  "max_queue_depth": 128,
00:07:00.928  "max_io_qpairs_per_ctrlr": 127,
00:07:00.928  "in_capsule_data_size": 4096,
00:07:00.928  "max_io_size": 131072,
00:07:00.928  "io_unit_size": 131072,
00:07:00.928  "max_aq_depth": 128,
00:07:00.928  "num_shared_buffers": 511,
00:07:00.928  "buf_cache_size": 4294967295,
00:07:00.928  "dif_insert_or_strip": false,
00:07:00.928  "zcopy": false,
00:07:00.928  "c2h_success": true,
00:07:00.928  "sock_priority": 0,
00:07:00.928  "abort_timeout_sec": 1,
00:07:00.928  "ack_timeout": 0,
00:07:00.928  "data_wr_pool_size": 0
00:07:00.928  }
00:07:00.928  }
00:07:00.928  ]
00:07:00.928  },
00:07:00.928  {
00:07:00.928  "subsystem": "iscsi",
00:07:00.928  "config": [
00:07:00.928  {
00:07:00.928  "method": "iscsi_set_options",
00:07:00.928  "params": {
00:07:00.928  "node_base": "iqn.2016-06.io.spdk",
00:07:00.928  "max_sessions": 128,
00:07:00.928  "max_connections_per_session": 2,
00:07:00.928  "max_queue_depth": 64,
00:07:00.928  "default_time2wait": 2,
00:07:00.928  "default_time2retain": 20,
00:07:00.928  "first_burst_length": 8192,
00:07:00.928  "immediate_data": true,
00:07:00.928  "allow_duplicated_isid": false,
00:07:00.928  "error_recovery_level": 0,
00:07:00.928  "nop_timeout": 60,
00:07:00.928  "nop_in_interval": 30,
00:07:00.928  "disable_chap": false,
00:07:00.928  "require_chap": false,
00:07:00.928  "mutual_chap": false,
00:07:00.928  "chap_group": 0,
00:07:00.928  "max_large_datain_per_connection": 64,
00:07:00.928  "max_r2t_per_connection": 4,
00:07:00.928  "pdu_pool_size": 36864,
00:07:00.928  "immediate_data_pool_size": 16384,
00:07:00.928  "data_out_pool_size": 2048
00:07:00.928  }
00:07:00.928  }
00:07:00.928  ]
00:07:00.928  }
00:07:00.928  ]
00:07:00.928  }
00:07:00.928   11:28:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:07:00.928   11:28:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69609
00:07:00.928   11:28:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69609 ']'
00:07:00.928   11:28:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69609
00:07:00.928    11:28:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname
00:07:00.928   11:28:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:00.928    11:28:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69609
00:07:00.928   11:28:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:00.928   11:28:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:00.928   11:28:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69609'
00:07:00.928  killing process with pid 69609
00:07:00.928   11:28:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69609
00:07:00.928   11:28:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69609
00:07:01.496   11:28:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69638
00:07:01.496   11:28:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:07:01.496   11:28:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:07:06.773   11:28:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69638
00:07:06.773   11:28:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69638 ']'
00:07:06.773   11:28:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69638
00:07:06.773    11:28:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname
00:07:06.773   11:28:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:06.773    11:28:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69638
00:07:06.773   11:28:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:06.773   11:28:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:06.773   11:28:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69638'
00:07:06.773  killing process with pid 69638
00:07:06.773   11:28:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69638
00:07:06.773   11:28:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69638
00:07:06.773   11:28:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:07:06.773   11:28:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:07:06.773  ************************************
00:07:06.773  END TEST skip_rpc_with_json
00:07:06.773  ************************************
00:07:06.773  
00:07:06.773  real	0m7.064s
00:07:06.773  user	0m6.612s
00:07:06.773  sys	0m0.763s
00:07:06.773   11:28:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:06.773   11:28:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:07:06.773   11:28:32 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:07:06.773   11:28:32 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:06.773   11:28:32 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:06.773   11:28:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:06.773  ************************************
00:07:06.773  START TEST skip_rpc_with_delay
00:07:06.773  ************************************
00:07:06.773   11:28:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay
00:07:06.773   11:28:32 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:07:06.773   11:28:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0
00:07:06.773   11:28:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:07:06.773   11:28:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:07:06.773   11:28:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:06.773    11:28:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:07:06.773   11:28:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:06.773    11:28:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:07:06.773   11:28:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:06.773   11:28:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:07:06.773   11:28:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:07:06.773   11:28:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:07:07.032  [2024-12-16 11:28:32.910867] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:07:07.032  [2024-12-16 11:28:32.911170] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2
00:07:07.032   11:28:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1
00:07:07.032   11:28:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:07.032  ************************************
00:07:07.032  END TEST skip_rpc_with_delay
00:07:07.032  ************************************
00:07:07.032   11:28:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:07.032   11:28:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:07.032  
00:07:07.032  real	0m0.188s
00:07:07.032  user	0m0.099s
00:07:07.032  sys	0m0.086s
00:07:07.032   11:28:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:07.032   11:28:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:07:07.032    11:28:33 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:07:07.032   11:28:33 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:07:07.032   11:28:33 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:07:07.032   11:28:33 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:07.032   11:28:33 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:07.032   11:28:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:07.032  ************************************
00:07:07.032  START TEST exit_on_failed_rpc_init
00:07:07.032  ************************************
00:07:07.032   11:28:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init
00:07:07.032   11:28:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69749
00:07:07.032   11:28:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:07:07.032   11:28:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69749
00:07:07.032   11:28:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 69749 ']'
00:07:07.032   11:28:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:07.032   11:28:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:07.032   11:28:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:07.032  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:07.032   11:28:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:07.032   11:28:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:07:07.292  [2024-12-16 11:28:33.160152] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:07.292  [2024-12-16 11:28:33.160400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69749 ]
00:07:07.292  [2024-12-16 11:28:33.327588] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:07.573  [2024-12-16 11:28:33.375587] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:08.142   11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:08.142   11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0
00:07:08.142   11:28:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:07:08.142   11:28:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:07:08.142   11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0
00:07:08.142   11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:07:08.142   11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:07:08.142   11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:08.142    11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:07:08.143   11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:08.143    11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:07:08.143   11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:08.143   11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:07:08.143   11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:07:08.143   11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:07:08.143  [2024-12-16 11:28:34.145229] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:08.143  [2024-12-16 11:28:34.145426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69767 ]
00:07:08.402  [2024-12-16 11:28:34.309381] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:08.402  [2024-12-16 11:28:34.360872] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:07:08.402  [2024-12-16 11:28:34.361063] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:07:08.402  [2024-12-16 11:28:34.361123] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:07:08.402  [2024-12-16 11:28:34.361234] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:08.662   11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234
00:07:08.662   11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:08.662   11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106
00:07:08.662   11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in
00:07:08.662   11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1
00:07:08.662   11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:08.662   11:28:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:08.662   11:28:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69749
00:07:08.662   11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 69749 ']'
00:07:08.662   11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 69749
00:07:08.662    11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname
00:07:08.663   11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:08.663    11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69749
00:07:08.663   11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:08.663   11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:08.663  killing process with pid 69749
00:07:08.663   11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69749'
00:07:08.663   11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 69749
00:07:08.663   11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 69749
00:07:08.921  
00:07:08.921  real	0m1.878s
00:07:08.921  user	0m2.072s
00:07:08.921  sys	0m0.549s
00:07:08.921   11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:08.921  ************************************
00:07:08.921  END TEST exit_on_failed_rpc_init
00:07:08.921  ************************************
00:07:08.921   11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:07:08.921   11:28:34 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:07:08.921  
00:07:08.921  real	0m15.085s
00:07:08.921  user	0m14.015s
00:07:08.921  sys	0m2.051s
00:07:08.921   11:28:34 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:08.921  ************************************
00:07:08.921  END TEST skip_rpc
00:07:08.921  ************************************
00:07:08.921   11:28:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:09.179   11:28:35  -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:07:09.179   11:28:35  -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:09.179   11:28:35  -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:09.179   11:28:35  -- common/autotest_common.sh@10 -- # set +x
00:07:09.179  ************************************
00:07:09.179  START TEST rpc_client
00:07:09.179  ************************************
00:07:09.179   11:28:35 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:07:09.179  * Looking for test storage...
00:07:09.179  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client
00:07:09.179    11:28:35 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:07:09.179     11:28:35 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:07:09.179     11:28:35 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version
00:07:09.179    11:28:35 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:07:09.179    11:28:35 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:09.179    11:28:35 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:09.179    11:28:35 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:09.179    11:28:35 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:07:09.179    11:28:35 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:07:09.179    11:28:35 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:07:09.179    11:28:35 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:07:09.179    11:28:35 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:07:09.179    11:28:35 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:07:09.179    11:28:35 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:07:09.179    11:28:35 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:09.179    11:28:35 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:07:09.179    11:28:35 rpc_client -- scripts/common.sh@345 -- # : 1
00:07:09.179    11:28:35 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:09.179    11:28:35 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:09.179     11:28:35 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:07:09.179     11:28:35 rpc_client -- scripts/common.sh@353 -- # local d=1
00:07:09.179     11:28:35 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:09.179     11:28:35 rpc_client -- scripts/common.sh@355 -- # echo 1
00:07:09.179    11:28:35 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:07:09.179     11:28:35 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:07:09.438     11:28:35 rpc_client -- scripts/common.sh@353 -- # local d=2
00:07:09.438     11:28:35 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:09.438     11:28:35 rpc_client -- scripts/common.sh@355 -- # echo 2
00:07:09.438    11:28:35 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:07:09.438    11:28:35 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:09.438    11:28:35 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:09.438    11:28:35 rpc_client -- scripts/common.sh@368 -- # return 0
00:07:09.438    11:28:35 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:09.438    11:28:35 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:07:09.438  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:09.438  		--rc genhtml_branch_coverage=1
00:07:09.438  		--rc genhtml_function_coverage=1
00:07:09.438  		--rc genhtml_legend=1
00:07:09.438  		--rc geninfo_all_blocks=1
00:07:09.438  		--rc geninfo_unexecuted_blocks=1
00:07:09.438  		
00:07:09.438  		'
00:07:09.438    11:28:35 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:07:09.438  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:09.438  		--rc genhtml_branch_coverage=1
00:07:09.438  		--rc genhtml_function_coverage=1
00:07:09.438  		--rc genhtml_legend=1
00:07:09.438  		--rc geninfo_all_blocks=1
00:07:09.438  		--rc geninfo_unexecuted_blocks=1
00:07:09.438  		
00:07:09.438  		'
00:07:09.438    11:28:35 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:07:09.438  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:09.438  		--rc genhtml_branch_coverage=1
00:07:09.438  		--rc genhtml_function_coverage=1
00:07:09.438  		--rc genhtml_legend=1
00:07:09.438  		--rc geninfo_all_blocks=1
00:07:09.438  		--rc geninfo_unexecuted_blocks=1
00:07:09.438  		
00:07:09.438  		'
00:07:09.438    11:28:35 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 
00:07:09.438  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:09.438  		--rc genhtml_branch_coverage=1
00:07:09.438  		--rc genhtml_function_coverage=1
00:07:09.438  		--rc genhtml_legend=1
00:07:09.438  		--rc geninfo_all_blocks=1
00:07:09.438  		--rc geninfo_unexecuted_blocks=1
00:07:09.438  		
00:07:09.438  		'
00:07:09.438   11:28:35 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test
00:07:09.438  OK
00:07:09.438   11:28:35 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:07:09.438  
00:07:09.438  real	0m0.269s
00:07:09.438  user	0m0.139s
00:07:09.438  sys	0m0.144s
00:07:09.438   11:28:35 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:09.438   11:28:35 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:07:09.438  ************************************
00:07:09.438  END TEST rpc_client
00:07:09.438  ************************************
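The rpc_client trace above spends most of its lines inside scripts/common.sh deciding whether the installed lcov is older than 2 before exporting the extra coverage flags. A minimal standalone sketch of that comparison, with an illustrative function name (the real helpers are cmp_versions/decimal in scripts/common.sh):

#!/usr/bin/env bash
# Split each version string on '.', '-' or ':' and compare the numeric
# components left to right, padding the shorter version with zeros.
# Mirrors the cmp_versions walk traced above; version_lt is an illustrative name.
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0}
        b=${ver2[v]:-0}
        (( a > b )) && return 1    # left side newer: not less-than
        (( a < b )) && return 0    # left side older: less-than
    done
    return 1                       # equal: not less-than
}

# lcov 1.15 is older than 2, so the coverage --rc options get exported above.
if version_lt 1.15 2; then
    echo 'enabling lcov_branch_coverage / lcov_function_coverage'
fi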
00:07:09.438   11:28:35  -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:07:09.438   11:28:35  -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:09.438   11:28:35  -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:09.438   11:28:35  -- common/autotest_common.sh@10 -- # set +x
00:07:09.438  ************************************
00:07:09.438  START TEST json_config
00:07:09.438  ************************************
00:07:09.438   11:28:35 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:07:09.438    11:28:35 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:07:09.438     11:28:35 json_config -- common/autotest_common.sh@1681 -- # lcov --version
00:07:09.438     11:28:35 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:07:09.699    11:28:35 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:07:09.699    11:28:35 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:09.699    11:28:35 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:09.699    11:28:35 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:09.699    11:28:35 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:07:09.699    11:28:35 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:07:09.699    11:28:35 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:07:09.699    11:28:35 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:07:09.699    11:28:35 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:07:09.699    11:28:35 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:07:09.699    11:28:35 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:07:09.699    11:28:35 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:09.699    11:28:35 json_config -- scripts/common.sh@344 -- # case "$op" in
00:07:09.699    11:28:35 json_config -- scripts/common.sh@345 -- # : 1
00:07:09.699    11:28:35 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:09.699    11:28:35 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:09.699     11:28:35 json_config -- scripts/common.sh@365 -- # decimal 1
00:07:09.699     11:28:35 json_config -- scripts/common.sh@353 -- # local d=1
00:07:09.699     11:28:35 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:09.699     11:28:35 json_config -- scripts/common.sh@355 -- # echo 1
00:07:09.699    11:28:35 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:07:09.699     11:28:35 json_config -- scripts/common.sh@366 -- # decimal 2
00:07:09.699     11:28:35 json_config -- scripts/common.sh@353 -- # local d=2
00:07:09.699     11:28:35 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:09.699     11:28:35 json_config -- scripts/common.sh@355 -- # echo 2
00:07:09.699    11:28:35 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:07:09.699    11:28:35 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:09.699    11:28:35 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:09.699    11:28:35 json_config -- scripts/common.sh@368 -- # return 0
00:07:09.699    11:28:35 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:09.699    11:28:35 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:07:09.699  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:09.699  		--rc genhtml_branch_coverage=1
00:07:09.699  		--rc genhtml_function_coverage=1
00:07:09.699  		--rc genhtml_legend=1
00:07:09.699  		--rc geninfo_all_blocks=1
00:07:09.699  		--rc geninfo_unexecuted_blocks=1
00:07:09.699  		
00:07:09.699  		'
00:07:09.699    11:28:35 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:07:09.699  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:09.699  		--rc genhtml_branch_coverage=1
00:07:09.699  		--rc genhtml_function_coverage=1
00:07:09.699  		--rc genhtml_legend=1
00:07:09.699  		--rc geninfo_all_blocks=1
00:07:09.699  		--rc geninfo_unexecuted_blocks=1
00:07:09.699  		
00:07:09.699  		'
00:07:09.699    11:28:35 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:07:09.699  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:09.699  		--rc genhtml_branch_coverage=1
00:07:09.699  		--rc genhtml_function_coverage=1
00:07:09.699  		--rc genhtml_legend=1
00:07:09.699  		--rc geninfo_all_blocks=1
00:07:09.699  		--rc geninfo_unexecuted_blocks=1
00:07:09.699  		
00:07:09.699  		'
00:07:09.699    11:28:35 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 
00:07:09.699  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:09.699  		--rc genhtml_branch_coverage=1
00:07:09.699  		--rc genhtml_function_coverage=1
00:07:09.699  		--rc genhtml_legend=1
00:07:09.699  		--rc geninfo_all_blocks=1
00:07:09.699  		--rc geninfo_unexecuted_blocks=1
00:07:09.699  		
00:07:09.699  		'
00:07:09.699   11:28:35 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:07:09.699     11:28:35 json_config -- nvmf/common.sh@7 -- # uname -s
00:07:09.699    11:28:35 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:09.699    11:28:35 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:09.699    11:28:35 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:09.699    11:28:35 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:09.699    11:28:35 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:09.699    11:28:35 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:09.699    11:28:35 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:09.699    11:28:35 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:09.699    11:28:35 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:09.699     11:28:35 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:09.699    11:28:35 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52311dfc-f4ec-4043-8e88-1c9590101b2f
00:07:09.699    11:28:35 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=52311dfc-f4ec-4043-8e88-1c9590101b2f
00:07:09.699    11:28:35 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:09.699    11:28:35 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:09.699    11:28:35 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:07:09.699    11:28:35 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:09.699    11:28:35 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:07:09.699     11:28:35 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:07:09.699     11:28:35 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:09.699     11:28:35 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:09.699     11:28:35 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:09.699      11:28:35 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:09.700      11:28:35 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:09.700      11:28:35 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:09.700      11:28:35 json_config -- paths/export.sh@5 -- # export PATH
00:07:09.700      11:28:35 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:09.700    11:28:35 json_config -- nvmf/common.sh@51 -- # : 0
00:07:09.700    11:28:35 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:09.700    11:28:35 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:09.700    11:28:35 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:09.700    11:28:35 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:09.700    11:28:35 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:09.700    11:28:35 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:09.700  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:09.700    11:28:35 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:09.700    11:28:35 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:09.700    11:28:35 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:09.700   11:28:35 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:07:09.700   11:28:35 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:07:09.700   11:28:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:07:09.700   11:28:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:07:09.700   11:28:35 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 	SPDK_TEST_ISCSI + 	SPDK_TEST_NVMF + 	SPDK_TEST_VHOST + 	SPDK_TEST_VHOST_INIT + 	SPDK_TEST_RBD == 0 ))
00:07:09.700   11:28:35 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests'
00:07:09.700  WARNING: No tests are enabled so not running JSON configuration tests
00:07:09.700   11:28:35 json_config -- json_config/json_config.sh@28 -- # exit 0
00:07:09.700  
00:07:09.700  real	0m0.220s
00:07:09.700  user	0m0.123s
00:07:09.700  sys	0m0.098s
00:07:09.700   11:28:35 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:09.700   11:28:35 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:09.700  ************************************
00:07:09.700  END TEST json_config
00:07:09.700  ************************************
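The json_config run above exits almost immediately: json_config.sh sums the SPDK_TEST_* feature flags it cares about and, since none of them are enabled on this job, prints a warning and returns success. A sketch of that gate, assuming illustrative default flag values rather than the job's real export set:

# Default each flag to 0 when it is not exported, as on this job.
: "${SPDK_TEST_BLOCKDEV:=0}" "${SPDK_TEST_ISCSI:=0}" "${SPDK_TEST_NVMF:=0}"
: "${SPDK_TEST_VHOST:=0}" "${SPDK_TEST_VHOST_INIT:=0}" "${SPDK_TEST_RBD:=0}"

enabled=$(( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD ))
if (( enabled == 0 )); then
    # Nothing to configure, so the test is a no-op rather than a failure.
    echo 'WARNING: No tests are enabled so not running JSON configuration tests'
    exit 0
fi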
00:07:09.700   11:28:35  -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:07:09.700   11:28:35  -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:09.700   11:28:35  -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:09.700   11:28:35  -- common/autotest_common.sh@10 -- # set +x
00:07:09.700  ************************************
00:07:09.700  START TEST json_config_extra_key
00:07:09.700  ************************************
00:07:09.700   11:28:35 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:07:09.700    11:28:35 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:07:09.700     11:28:35 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version
00:07:09.700     11:28:35 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:07:09.960    11:28:35 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:07:09.960    11:28:35 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:09.960    11:28:35 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:09.960    11:28:35 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:09.960    11:28:35 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:07:09.960    11:28:35 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:07:09.960    11:28:35 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:07:09.960    11:28:35 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:07:09.960    11:28:35 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:07:09.960    11:28:35 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:07:09.960    11:28:35 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:07:09.960    11:28:35 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:09.960    11:28:35 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:07:09.960    11:28:35 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:07:09.960    11:28:35 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:09.960    11:28:35 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:09.960     11:28:35 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:07:09.960     11:28:35 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:07:09.960     11:28:35 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:09.960     11:28:35 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:07:09.960    11:28:35 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:07:09.960     11:28:35 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:07:09.960     11:28:35 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:07:09.960     11:28:35 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:09.960     11:28:35 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:07:09.960    11:28:35 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:07:09.960    11:28:35 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:09.960    11:28:35 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:09.960    11:28:35 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:07:09.960    11:28:35 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:09.960    11:28:35 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:07:09.960  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:09.960  		--rc genhtml_branch_coverage=1
00:07:09.960  		--rc genhtml_function_coverage=1
00:07:09.960  		--rc genhtml_legend=1
00:07:09.960  		--rc geninfo_all_blocks=1
00:07:09.960  		--rc geninfo_unexecuted_blocks=1
00:07:09.960  		
00:07:09.960  		'
00:07:09.960    11:28:35 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:07:09.960  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:09.960  		--rc genhtml_branch_coverage=1
00:07:09.960  		--rc genhtml_function_coverage=1
00:07:09.960  		--rc genhtml_legend=1
00:07:09.960  		--rc geninfo_all_blocks=1
00:07:09.960  		--rc geninfo_unexecuted_blocks=1
00:07:09.960  		
00:07:09.960  		'
00:07:09.960    11:28:35 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:07:09.960  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:09.960  		--rc genhtml_branch_coverage=1
00:07:09.960  		--rc genhtml_function_coverage=1
00:07:09.960  		--rc genhtml_legend=1
00:07:09.960  		--rc geninfo_all_blocks=1
00:07:09.960  		--rc geninfo_unexecuted_blocks=1
00:07:09.960  		
00:07:09.960  		'
00:07:09.960    11:28:35 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 
00:07:09.960  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:09.960  		--rc genhtml_branch_coverage=1
00:07:09.960  		--rc genhtml_function_coverage=1
00:07:09.960  		--rc genhtml_legend=1
00:07:09.960  		--rc geninfo_all_blocks=1
00:07:09.960  		--rc geninfo_unexecuted_blocks=1
00:07:09.960  		
00:07:09.960  		'
00:07:09.960   11:28:35 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:07:09.960     11:28:35 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:07:09.960    11:28:35 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:09.960    11:28:35 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:09.960    11:28:35 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:09.960    11:28:35 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:09.960    11:28:35 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:09.960    11:28:35 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:09.960    11:28:35 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:09.960    11:28:35 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:09.960    11:28:35 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:09.960     11:28:35 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:09.960    11:28:35 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:52311dfc-f4ec-4043-8e88-1c9590101b2f
00:07:09.960    11:28:35 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=52311dfc-f4ec-4043-8e88-1c9590101b2f
00:07:09.960    11:28:35 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:09.960    11:28:35 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:09.960    11:28:35 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:07:09.960    11:28:35 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:09.960    11:28:35 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:07:09.960     11:28:35 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:07:09.960     11:28:35 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:09.960     11:28:35 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:09.960     11:28:35 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:09.961      11:28:35 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:09.961      11:28:35 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:09.961      11:28:35 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:09.961      11:28:35 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:07:09.961      11:28:35 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:09.961    11:28:35 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:07:09.961    11:28:35 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:09.961    11:28:35 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:09.961    11:28:35 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:09.961    11:28:35 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:09.961    11:28:35 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:09.961    11:28:35 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:09.961  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:09.961    11:28:35 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:09.961    11:28:35 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:09.961    11:28:35 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:09.961   11:28:35 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:07:09.961   11:28:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:07:09.961   11:28:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:07:09.961   11:28:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:07:09.961   11:28:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:07:09.961   11:28:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:07:09.961   11:28:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:07:09.961   11:28:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')
00:07:09.961   11:28:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:07:09.961   11:28:35 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:07:09.961   11:28:35 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:07:09.961  INFO: launching applications...
00:07:09.961   11:28:35 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
00:07:09.961   11:28:35 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:07:09.961   11:28:35 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:07:09.961   11:28:35 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:07:09.961   11:28:35 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:07:09.961   11:28:35 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:07:09.961   11:28:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:07:09.961   11:28:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:07:09.961   11:28:35 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69950
00:07:09.961   11:28:35 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:07:09.961  Waiting for target to run...
00:07:09.961   11:28:35 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69950 /var/tmp/spdk_tgt.sock
00:07:09.961  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:07:09.961   11:28:35 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 69950 ']'
00:07:09.961   11:28:35 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:07:09.961   11:28:35 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
00:07:09.961   11:28:35 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:09.961   11:28:35 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:07:09.961   11:28:35 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:09.961   11:28:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:07:09.961  [2024-12-16 11:28:35.977527] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:09.961  [2024-12-16 11:28:35.977789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69950 ]
00:07:10.530  [2024-12-16 11:28:36.354490] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:10.530  [2024-12-16 11:28:36.396053] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:10.790   11:28:36 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:10.790   11:28:36 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0
00:07:10.790  
00:07:10.790  INFO: shutting down applications...
00:07:10.790   11:28:36 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:07:10.790   11:28:36 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:07:10.790   11:28:36 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:07:10.790   11:28:36 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:07:10.790   11:28:36 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:07:10.790   11:28:36 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69950 ]]
00:07:10.790   11:28:36 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69950
00:07:10.790   11:28:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:07:10.790   11:28:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:10.790   11:28:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69950
00:07:10.790   11:28:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:07:11.359   11:28:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:07:11.359   11:28:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:11.359   11:28:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69950
00:07:11.359   11:28:37 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:07:11.359   11:28:37 json_config_extra_key -- json_config/common.sh@43 -- # break
00:07:11.359   11:28:37 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:07:11.359   11:28:37 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:07:11.359  SPDK target shutdown done
00:07:11.359   11:28:37 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:07:11.359  Success
00:07:11.359  
00:07:11.359  real	0m1.696s
00:07:11.359  user	0m1.446s
00:07:11.359  sys	0m0.476s
00:07:11.359   11:28:37 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:11.359   11:28:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:07:11.359  ************************************
00:07:11.359  END TEST json_config_extra_key
00:07:11.359  ************************************
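The json_config_extra_key run above is the start/stop pattern from json_config/common.sh: launch spdk_tgt against extra_key.json, wait for its RPC socket, then SIGINT it and poll until the process disappears. A condensed sketch using the paths and timeouts visible in the trace (the real harness tracks pids and sockets in associative arrays):

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
RPC_SOCK=/var/tmp/spdk_tgt.sock
CONFIG=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json

# Launch the target with the same core mask, memory size and RPC socket as above.
"$SPDK_BIN" -m 0x1 -s 1024 -r "$RPC_SOCK" --json "$CONFIG" &
tgt_pid=$!

# Bounded wait for the RPC socket to appear (the real waitforlisten also
# retries an RPC call against it before declaring the app ready).
for (( i = 0; i < 100; i++ )); do
    [[ -S $RPC_SOCK ]] && break
    sleep 0.1
done

# Shutdown: SIGINT, then poll `kill -0` up to 30 times with a 0.5 s sleep,
# the same loop json_config/common.sh runs above before printing success.
kill -SIGINT "$tgt_pid"
for (( i = 0; i < 30; i++ )); do
    kill -0 "$tgt_pid" 2>/dev/null || break
    sleep 0.5
done
echo 'SPDK target shutdown done'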
00:07:11.359   11:28:37  -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:07:11.359   11:28:37  -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:11.359   11:28:37  -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:11.359   11:28:37  -- common/autotest_common.sh@10 -- # set +x
00:07:11.359  ************************************
00:07:11.359  START TEST alias_rpc
00:07:11.359  ************************************
00:07:11.359   11:28:37 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:07:11.618  * Looking for test storage...
00:07:11.618  * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc
00:07:11.618    11:28:37 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:07:11.618     11:28:37 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:07:11.618     11:28:37 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version
00:07:11.618    11:28:37 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:07:11.618    11:28:37 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:11.618    11:28:37 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:11.618    11:28:37 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:11.618    11:28:37 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:07:11.618    11:28:37 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:07:11.618    11:28:37 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:07:11.618    11:28:37 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:07:11.618    11:28:37 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:07:11.618    11:28:37 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:07:11.618    11:28:37 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:07:11.618    11:28:37 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:11.618    11:28:37 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:07:11.618    11:28:37 alias_rpc -- scripts/common.sh@345 -- # : 1
00:07:11.618    11:28:37 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:11.618    11:28:37 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:11.618     11:28:37 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:07:11.618     11:28:37 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:07:11.618     11:28:37 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:11.618     11:28:37 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:07:11.619    11:28:37 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:07:11.619     11:28:37 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:07:11.619     11:28:37 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:07:11.619     11:28:37 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:11.619     11:28:37 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:07:11.619    11:28:37 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:07:11.619    11:28:37 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:11.619    11:28:37 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:11.619    11:28:37 alias_rpc -- scripts/common.sh@368 -- # return 0
00:07:11.619    11:28:37 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:11.619    11:28:37 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:07:11.619  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:11.619  		--rc genhtml_branch_coverage=1
00:07:11.619  		--rc genhtml_function_coverage=1
00:07:11.619  		--rc genhtml_legend=1
00:07:11.619  		--rc geninfo_all_blocks=1
00:07:11.619  		--rc geninfo_unexecuted_blocks=1
00:07:11.619  		
00:07:11.619  		'
00:07:11.619    11:28:37 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:07:11.619  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:11.619  		--rc genhtml_branch_coverage=1
00:07:11.619  		--rc genhtml_function_coverage=1
00:07:11.619  		--rc genhtml_legend=1
00:07:11.619  		--rc geninfo_all_blocks=1
00:07:11.619  		--rc geninfo_unexecuted_blocks=1
00:07:11.619  		
00:07:11.619  		'
00:07:11.619    11:28:37 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:07:11.619  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:11.619  		--rc genhtml_branch_coverage=1
00:07:11.619  		--rc genhtml_function_coverage=1
00:07:11.619  		--rc genhtml_legend=1
00:07:11.619  		--rc geninfo_all_blocks=1
00:07:11.619  		--rc geninfo_unexecuted_blocks=1
00:07:11.619  		
00:07:11.619  		'
00:07:11.619    11:28:37 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 
00:07:11.619  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:11.619  		--rc genhtml_branch_coverage=1
00:07:11.619  		--rc genhtml_function_coverage=1
00:07:11.619  		--rc genhtml_legend=1
00:07:11.619  		--rc geninfo_all_blocks=1
00:07:11.619  		--rc geninfo_unexecuted_blocks=1
00:07:11.619  		
00:07:11.619  		'
00:07:11.619   11:28:37 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:07:11.619   11:28:37 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70029
00:07:11.619   11:28:37 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:07:11.619   11:28:37 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70029
00:07:11.619   11:28:37 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 70029 ']'
00:07:11.619   11:28:37 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:11.619   11:28:37 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:11.619   11:28:37 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:11.619  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:11.619   11:28:37 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:11.619   11:28:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:11.923  [2024-12-16 11:28:37.722132] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:11.923  [2024-12-16 11:28:37.722371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70029 ]
00:07:11.923  [2024-12-16 11:28:37.867442] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:11.923  [2024-12-16 11:28:37.919984] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:12.868   11:28:38 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:12.868   11:28:38 alias_rpc -- common/autotest_common.sh@864 -- # return 0
00:07:12.868   11:28:38 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i
00:07:12.868   11:28:38 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70029
00:07:12.868   11:28:38 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 70029 ']'
00:07:12.868   11:28:38 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 70029
00:07:12.868    11:28:38 alias_rpc -- common/autotest_common.sh@955 -- # uname
00:07:12.868   11:28:38 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:12.868    11:28:38 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70029
00:07:12.868  killing process with pid 70029
00:07:12.868   11:28:38 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:12.868   11:28:38 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:12.868   11:28:38 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70029'
00:07:12.868   11:28:38 alias_rpc -- common/autotest_common.sh@969 -- # kill 70029
00:07:12.868   11:28:38 alias_rpc -- common/autotest_common.sh@974 -- # wait 70029
00:07:13.436  ************************************
00:07:13.436  END TEST alias_rpc
00:07:13.436  ************************************
00:07:13.436  
00:07:13.436  real	0m1.871s
00:07:13.436  user	0m1.950s
00:07:13.436  sys	0m0.516s
00:07:13.436   11:28:39 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:13.436   11:28:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x
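The alias_rpc run above finishes by tearing the target down through the killprocess helper: check the pid is still alive, read the process's command name (SPDK reactors report as reactor_0), refuse to ever signal sudo, then kill and reap it. A condensed, illustrative version of that helper, not the exact autotest_common.sh source:

killprocess() {
    local pid=$1 process_name
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 1                         # already gone?
    process_name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0
    [[ $process_name != sudo ]] || return 1            # never signal sudo itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}

# In the trace above this is invoked as: killprocess 70029, after load_config -i.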
00:07:13.436   11:28:39  -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]]
00:07:13.436   11:28:39  -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh
00:07:13.436   11:28:39  -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:13.436   11:28:39  -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:13.436   11:28:39  -- common/autotest_common.sh@10 -- # set +x
00:07:13.436  ************************************
00:07:13.436  START TEST spdkcli_tcp
00:07:13.436  ************************************
00:07:13.436   11:28:39 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh
00:07:13.436  * Looking for test storage...
00:07:13.436  * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli
00:07:13.436    11:28:39 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:07:13.436     11:28:39 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version
00:07:13.436     11:28:39 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:07:13.695    11:28:39 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:07:13.695    11:28:39 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:13.695    11:28:39 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:13.695    11:28:39 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:13.695    11:28:39 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:07:13.695    11:28:39 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:07:13.695    11:28:39 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:07:13.695    11:28:39 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:07:13.695    11:28:39 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:07:13.695    11:28:39 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:07:13.695    11:28:39 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:07:13.695    11:28:39 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:13.695    11:28:39 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in
00:07:13.695    11:28:39 spdkcli_tcp -- scripts/common.sh@345 -- # : 1
00:07:13.695    11:28:39 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:13.695    11:28:39 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:13.695     11:28:39 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1
00:07:13.695     11:28:39 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1
00:07:13.695     11:28:39 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:13.695     11:28:39 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1
00:07:13.695    11:28:39 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:07:13.695     11:28:39 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2
00:07:13.695     11:28:39 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2
00:07:13.695     11:28:39 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:13.695     11:28:39 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2
00:07:13.695    11:28:39 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:07:13.695    11:28:39 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:13.695    11:28:39 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:13.695    11:28:39 spdkcli_tcp -- scripts/common.sh@368 -- # return 0
00:07:13.696    11:28:39 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:13.696    11:28:39 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:07:13.696  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:13.696  		--rc genhtml_branch_coverage=1
00:07:13.696  		--rc genhtml_function_coverage=1
00:07:13.696  		--rc genhtml_legend=1
00:07:13.696  		--rc geninfo_all_blocks=1
00:07:13.696  		--rc geninfo_unexecuted_blocks=1
00:07:13.696  		
00:07:13.696  		'
00:07:13.696    11:28:39 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:07:13.696  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:13.696  		--rc genhtml_branch_coverage=1
00:07:13.696  		--rc genhtml_function_coverage=1
00:07:13.696  		--rc genhtml_legend=1
00:07:13.696  		--rc geninfo_all_blocks=1
00:07:13.696  		--rc geninfo_unexecuted_blocks=1
00:07:13.696  		
00:07:13.696  		'
00:07:13.696    11:28:39 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:07:13.696  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:13.696  		--rc genhtml_branch_coverage=1
00:07:13.696  		--rc genhtml_function_coverage=1
00:07:13.696  		--rc genhtml_legend=1
00:07:13.696  		--rc geninfo_all_blocks=1
00:07:13.696  		--rc geninfo_unexecuted_blocks=1
00:07:13.696  		
00:07:13.696  		'
00:07:13.696    11:28:39 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 
00:07:13.696  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:13.696  		--rc genhtml_branch_coverage=1
00:07:13.696  		--rc genhtml_function_coverage=1
00:07:13.696  		--rc genhtml_legend=1
00:07:13.696  		--rc geninfo_all_blocks=1
00:07:13.696  		--rc geninfo_unexecuted_blocks=1
00:07:13.696  		
00:07:13.696  		'
00:07:13.696   11:28:39 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh
00:07:13.696    11:28:39 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py
00:07:13.696    11:28:39 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py
00:07:13.696   11:28:39 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:07:13.696   11:28:39 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:07:13.696   11:28:39 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:07:13.696   11:28:39 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:07:13.696   11:28:39 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:13.696   11:28:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:07:13.696   11:28:39 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70114
00:07:13.696   11:28:39 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:07:13.696  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:13.696   11:28:39 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70114
00:07:13.696   11:28:39 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 70114 ']'
00:07:13.696   11:28:39 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:13.696   11:28:39 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:13.696   11:28:39 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:13.696   11:28:39 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:13.696   11:28:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:07:13.696  [2024-12-16 11:28:39.692914] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:13.696  [2024-12-16 11:28:39.693589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70114 ]
00:07:13.955  [2024-12-16 11:28:39.854498] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:13.955  [2024-12-16 11:28:39.915511] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:13.955  [2024-12-16 11:28:39.915645] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:07:14.890   11:28:40 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:14.890   11:28:40 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0
00:07:14.890   11:28:40 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70131
00:07:14.890   11:28:40 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:07:14.890   11:28:40 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:07:14.890  [
00:07:14.890    "bdev_malloc_delete",
00:07:14.890    "bdev_malloc_create",
00:07:14.890    "bdev_null_resize",
00:07:14.890    "bdev_null_delete",
00:07:14.890    "bdev_null_create",
00:07:14.890    "bdev_nvme_cuse_unregister",
00:07:14.890    "bdev_nvme_cuse_register",
00:07:14.890    "bdev_opal_new_user",
00:07:14.890    "bdev_opal_set_lock_state",
00:07:14.890    "bdev_opal_delete",
00:07:14.890    "bdev_opal_get_info",
00:07:14.890    "bdev_opal_create",
00:07:14.890    "bdev_nvme_opal_revert",
00:07:14.890    "bdev_nvme_opal_init",
00:07:14.890    "bdev_nvme_send_cmd",
00:07:14.890    "bdev_nvme_set_keys",
00:07:14.890    "bdev_nvme_get_path_iostat",
00:07:14.890    "bdev_nvme_get_mdns_discovery_info",
00:07:14.890    "bdev_nvme_stop_mdns_discovery",
00:07:14.890    "bdev_nvme_start_mdns_discovery",
00:07:14.890    "bdev_nvme_set_multipath_policy",
00:07:14.890    "bdev_nvme_set_preferred_path",
00:07:14.890    "bdev_nvme_get_io_paths",
00:07:14.890    "bdev_nvme_remove_error_injection",
00:07:14.890    "bdev_nvme_add_error_injection",
00:07:14.890    "bdev_nvme_get_discovery_info",
00:07:14.891    "bdev_nvme_stop_discovery",
00:07:14.891    "bdev_nvme_start_discovery",
00:07:14.891    "bdev_nvme_get_controller_health_info",
00:07:14.891    "bdev_nvme_disable_controller",
00:07:14.891    "bdev_nvme_enable_controller",
00:07:14.891    "bdev_nvme_reset_controller",
00:07:14.891    "bdev_nvme_get_transport_statistics",
00:07:14.891    "bdev_nvme_apply_firmware",
00:07:14.891    "bdev_nvme_detach_controller",
00:07:14.891    "bdev_nvme_get_controllers",
00:07:14.891    "bdev_nvme_attach_controller",
00:07:14.891    "bdev_nvme_set_hotplug",
00:07:14.891    "bdev_nvme_set_options",
00:07:14.891    "bdev_passthru_delete",
00:07:14.891    "bdev_passthru_create",
00:07:14.891    "bdev_lvol_set_parent_bdev",
00:07:14.891    "bdev_lvol_set_parent",
00:07:14.891    "bdev_lvol_check_shallow_copy",
00:07:14.891    "bdev_lvol_start_shallow_copy",
00:07:14.891    "bdev_lvol_grow_lvstore",
00:07:14.891    "bdev_lvol_get_lvols",
00:07:14.891    "bdev_lvol_get_lvstores",
00:07:14.891    "bdev_lvol_delete",
00:07:14.891    "bdev_lvol_set_read_only",
00:07:14.891    "bdev_lvol_resize",
00:07:14.891    "bdev_lvol_decouple_parent",
00:07:14.891    "bdev_lvol_inflate",
00:07:14.891    "bdev_lvol_rename",
00:07:14.891    "bdev_lvol_clone_bdev",
00:07:14.891    "bdev_lvol_clone",
00:07:14.891    "bdev_lvol_snapshot",
00:07:14.891    "bdev_lvol_create",
00:07:14.891    "bdev_lvol_delete_lvstore",
00:07:14.891    "bdev_lvol_rename_lvstore",
00:07:14.891    "bdev_lvol_create_lvstore",
00:07:14.891    "bdev_raid_set_options",
00:07:14.891    "bdev_raid_remove_base_bdev",
00:07:14.891    "bdev_raid_add_base_bdev",
00:07:14.891    "bdev_raid_delete",
00:07:14.891    "bdev_raid_create",
00:07:14.891    "bdev_raid_get_bdevs",
00:07:14.891    "bdev_error_inject_error",
00:07:14.891    "bdev_error_delete",
00:07:14.891    "bdev_error_create",
00:07:14.891    "bdev_split_delete",
00:07:14.891    "bdev_split_create",
00:07:14.891    "bdev_delay_delete",
00:07:14.891    "bdev_delay_create",
00:07:14.891    "bdev_delay_update_latency",
00:07:14.891    "bdev_zone_block_delete",
00:07:14.891    "bdev_zone_block_create",
00:07:14.891    "blobfs_create",
00:07:14.891    "blobfs_detect",
00:07:14.891    "blobfs_set_cache_size",
00:07:14.891    "bdev_aio_delete",
00:07:14.891    "bdev_aio_rescan",
00:07:14.891    "bdev_aio_create",
00:07:14.891    "bdev_ftl_set_property",
00:07:14.891    "bdev_ftl_get_properties",
00:07:14.891    "bdev_ftl_get_stats",
00:07:14.891    "bdev_ftl_unmap",
00:07:14.891    "bdev_ftl_unload",
00:07:14.891    "bdev_ftl_delete",
00:07:14.891    "bdev_ftl_load",
00:07:14.891    "bdev_ftl_create",
00:07:14.891    "bdev_virtio_attach_controller",
00:07:14.891    "bdev_virtio_scsi_get_devices",
00:07:14.891    "bdev_virtio_detach_controller",
00:07:14.891    "bdev_virtio_blk_set_hotplug",
00:07:14.891    "bdev_iscsi_delete",
00:07:14.891    "bdev_iscsi_create",
00:07:14.891    "bdev_iscsi_set_options",
00:07:14.891    "accel_error_inject_error",
00:07:14.891    "ioat_scan_accel_module",
00:07:14.891    "dsa_scan_accel_module",
00:07:14.891    "iaa_scan_accel_module",
00:07:14.891    "keyring_file_remove_key",
00:07:14.891    "keyring_file_add_key",
00:07:14.891    "keyring_linux_set_options",
00:07:14.891    "fsdev_aio_delete",
00:07:14.891    "fsdev_aio_create",
00:07:14.891    "iscsi_get_histogram",
00:07:14.891    "iscsi_enable_histogram",
00:07:14.891    "iscsi_set_options",
00:07:14.891    "iscsi_get_auth_groups",
00:07:14.891    "iscsi_auth_group_remove_secret",
00:07:14.891    "iscsi_auth_group_add_secret",
00:07:14.891    "iscsi_delete_auth_group",
00:07:14.891    "iscsi_create_auth_group",
00:07:14.891    "iscsi_set_discovery_auth",
00:07:14.891    "iscsi_get_options",
00:07:14.891    "iscsi_target_node_request_logout",
00:07:14.891    "iscsi_target_node_set_redirect",
00:07:14.891    "iscsi_target_node_set_auth",
00:07:14.891    "iscsi_target_node_add_lun",
00:07:14.891    "iscsi_get_stats",
00:07:14.891    "iscsi_get_connections",
00:07:14.891    "iscsi_portal_group_set_auth",
00:07:14.891    "iscsi_start_portal_group",
00:07:14.891    "iscsi_delete_portal_group",
00:07:14.891    "iscsi_create_portal_group",
00:07:14.891    "iscsi_get_portal_groups",
00:07:14.891    "iscsi_delete_target_node",
00:07:14.891    "iscsi_target_node_remove_pg_ig_maps",
00:07:14.891    "iscsi_target_node_add_pg_ig_maps",
00:07:14.891    "iscsi_create_target_node",
00:07:14.891    "iscsi_get_target_nodes",
00:07:14.891    "iscsi_delete_initiator_group",
00:07:14.891    "iscsi_initiator_group_remove_initiators",
00:07:14.891    "iscsi_initiator_group_add_initiators",
00:07:14.891    "iscsi_create_initiator_group",
00:07:14.891    "iscsi_get_initiator_groups",
00:07:14.891    "nvmf_set_crdt",
00:07:14.891    "nvmf_set_config",
00:07:14.891    "nvmf_set_max_subsystems",
00:07:14.891    "nvmf_stop_mdns_prr",
00:07:14.891    "nvmf_publish_mdns_prr",
00:07:14.891    "nvmf_subsystem_get_listeners",
00:07:14.891    "nvmf_subsystem_get_qpairs",
00:07:14.891    "nvmf_subsystem_get_controllers",
00:07:14.891    "nvmf_get_stats",
00:07:14.891    "nvmf_get_transports",
00:07:14.891    "nvmf_create_transport",
00:07:14.891    "nvmf_get_targets",
00:07:14.891    "nvmf_delete_target",
00:07:14.891    "nvmf_create_target",
00:07:14.891    "nvmf_subsystem_allow_any_host",
00:07:14.891    "nvmf_subsystem_set_keys",
00:07:14.891    "nvmf_subsystem_remove_host",
00:07:14.891    "nvmf_subsystem_add_host",
00:07:14.891    "nvmf_ns_remove_host",
00:07:14.891    "nvmf_ns_add_host",
00:07:14.891    "nvmf_subsystem_remove_ns",
00:07:14.891    "nvmf_subsystem_set_ns_ana_group",
00:07:14.891    "nvmf_subsystem_add_ns",
00:07:14.891    "nvmf_subsystem_listener_set_ana_state",
00:07:14.891    "nvmf_discovery_get_referrals",
00:07:14.891    "nvmf_discovery_remove_referral",
00:07:14.891    "nvmf_discovery_add_referral",
00:07:14.891    "nvmf_subsystem_remove_listener",
00:07:14.891    "nvmf_subsystem_add_listener",
00:07:14.891    "nvmf_delete_subsystem",
00:07:14.891    "nvmf_create_subsystem",
00:07:14.891    "nvmf_get_subsystems",
00:07:14.891    "env_dpdk_get_mem_stats",
00:07:14.891    "nbd_get_disks",
00:07:14.891    "nbd_stop_disk",
00:07:14.891    "nbd_start_disk",
00:07:14.891    "ublk_recover_disk",
00:07:14.891    "ublk_get_disks",
00:07:14.891    "ublk_stop_disk",
00:07:14.891    "ublk_start_disk",
00:07:14.891    "ublk_destroy_target",
00:07:14.891    "ublk_create_target",
00:07:14.891    "virtio_blk_create_transport",
00:07:14.891    "virtio_blk_get_transports",
00:07:14.891    "vhost_controller_set_coalescing",
00:07:14.891    "vhost_get_controllers",
00:07:14.891    "vhost_delete_controller",
00:07:14.891    "vhost_create_blk_controller",
00:07:14.891    "vhost_scsi_controller_remove_target",
00:07:14.891    "vhost_scsi_controller_add_target",
00:07:14.891    "vhost_start_scsi_controller",
00:07:14.891    "vhost_create_scsi_controller",
00:07:14.891    "thread_set_cpumask",
00:07:14.891    "scheduler_set_options",
00:07:14.891    "framework_get_governor",
00:07:14.891    "framework_get_scheduler",
00:07:14.891    "framework_set_scheduler",
00:07:14.891    "framework_get_reactors",
00:07:14.891    "thread_get_io_channels",
00:07:14.891    "thread_get_pollers",
00:07:14.891    "thread_get_stats",
00:07:14.891    "framework_monitor_context_switch",
00:07:14.891    "spdk_kill_instance",
00:07:14.891    "log_enable_timestamps",
00:07:14.891    "log_get_flags",
00:07:14.891    "log_clear_flag",
00:07:14.891    "log_set_flag",
00:07:14.891    "log_get_level",
00:07:14.891    "log_set_level",
00:07:14.891    "log_get_print_level",
00:07:14.891    "log_set_print_level",
00:07:14.891    "framework_enable_cpumask_locks",
00:07:14.891    "framework_disable_cpumask_locks",
00:07:14.891    "framework_wait_init",
00:07:14.891    "framework_start_init",
00:07:14.891    "scsi_get_devices",
00:07:14.891    "bdev_get_histogram",
00:07:14.891    "bdev_enable_histogram",
00:07:14.891    "bdev_set_qos_limit",
00:07:14.891    "bdev_set_qd_sampling_period",
00:07:14.891    "bdev_get_bdevs",
00:07:14.891    "bdev_reset_iostat",
00:07:14.891    "bdev_get_iostat",
00:07:14.891    "bdev_examine",
00:07:14.891    "bdev_wait_for_examine",
00:07:14.891    "bdev_set_options",
00:07:14.891    "accel_get_stats",
00:07:14.891    "accel_set_options",
00:07:14.891    "accel_set_driver",
00:07:14.891    "accel_crypto_key_destroy",
00:07:14.891    "accel_crypto_keys_get",
00:07:14.891    "accel_crypto_key_create",
00:07:14.891    "accel_assign_opc",
00:07:14.891    "accel_get_module_info",
00:07:14.891    "accel_get_opc_assignments",
00:07:14.891    "vmd_rescan",
00:07:14.891    "vmd_remove_device",
00:07:14.891    "vmd_enable",
00:07:14.891    "sock_get_default_impl",
00:07:14.891    "sock_set_default_impl",
00:07:14.891    "sock_impl_set_options",
00:07:14.891    "sock_impl_get_options",
00:07:14.891    "iobuf_get_stats",
00:07:14.891    "iobuf_set_options",
00:07:14.891    "keyring_get_keys",
00:07:14.891    "framework_get_pci_devices",
00:07:14.891    "framework_get_config",
00:07:14.891    "framework_get_subsystems",
00:07:14.891    "fsdev_set_opts",
00:07:14.891    "fsdev_get_opts",
00:07:14.891    "trace_get_info",
00:07:14.891    "trace_get_tpoint_group_mask",
00:07:14.891    "trace_disable_tpoint_group",
00:07:14.891    "trace_enable_tpoint_group",
00:07:14.891    "trace_clear_tpoint_mask",
00:07:14.891    "trace_set_tpoint_mask",
00:07:14.891    "notify_get_notifications",
00:07:14.891    "notify_get_types",
00:07:14.891    "spdk_get_version",
00:07:14.892    "rpc_get_methods"
00:07:14.892  ]
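The array just above is the target's reply to an rpc_get_methods call made during the spdkcli_tcp test. A minimal sketch of issuing the same query by hand is shown below; the TCP address and port are illustrative assumptions for a target that has an extra TCP RPC listener, while with no options rpc.py talks to the default UNIX socket /var/tmp/spdk.sock.
    # sketch only, not part of the captured log
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 127.0.0.1 -p 5260 rpc_get_methods   # assumed TCP listener address/port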
00:07:14.892   11:28:40 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:07:14.892   11:28:40 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:14.892   11:28:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:07:14.892   11:28:40 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:07:14.892   11:28:40 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70114
00:07:14.892   11:28:40 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 70114 ']'
00:07:14.892   11:28:40 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 70114
00:07:14.892    11:28:40 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname
00:07:14.892   11:28:40 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:14.892    11:28:40 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70114
00:07:14.892  killing process with pid 70114
00:07:14.892   11:28:40 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:14.892   11:28:40 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:14.892   11:28:40 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70114'
00:07:14.892   11:28:40 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 70114
00:07:14.892   11:28:40 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 70114
00:07:15.461  ************************************
00:07:15.461  END TEST spdkcli_tcp
00:07:15.461  ************************************
00:07:15.461  
00:07:15.461  real	0m1.969s
00:07:15.461  user	0m3.318s
00:07:15.461  sys	0m0.629s
00:07:15.461   11:28:41 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:15.461   11:28:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:07:15.461   11:28:41  -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:07:15.461   11:28:41  -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:15.461   11:28:41  -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:15.461   11:28:41  -- common/autotest_common.sh@10 -- # set +x
00:07:15.461  ************************************
00:07:15.461  START TEST dpdk_mem_utility
00:07:15.461  ************************************
00:07:15.461   11:28:41 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:07:15.461  * Looking for test storage...
00:07:15.461  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility
00:07:15.461    11:28:41 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:07:15.461     11:28:41 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:07:15.461     11:28:41 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version
00:07:15.720    11:28:41 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:07:15.720    11:28:41 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:15.720    11:28:41 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:15.720    11:28:41 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:15.720    11:28:41 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-:
00:07:15.720    11:28:41 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1
00:07:15.720    11:28:41 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-:
00:07:15.720    11:28:41 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2
00:07:15.720    11:28:41 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<'
00:07:15.720    11:28:41 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2
00:07:15.720    11:28:41 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1
00:07:15.720    11:28:41 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:15.720    11:28:41 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in
00:07:15.720    11:28:41 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1
00:07:15.720    11:28:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:15.720    11:28:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:15.720     11:28:41 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1
00:07:15.720     11:28:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1
00:07:15.720     11:28:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:15.720     11:28:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1
00:07:15.720    11:28:41 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1
00:07:15.720     11:28:41 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2
00:07:15.720     11:28:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2
00:07:15.720     11:28:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:15.720     11:28:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2
00:07:15.720    11:28:41 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2
00:07:15.720    11:28:41 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:15.720    11:28:41 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:15.720    11:28:41 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0
00:07:15.720    11:28:41 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:15.720    11:28:41 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:07:15.720  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:15.720  		--rc genhtml_branch_coverage=1
00:07:15.720  		--rc genhtml_function_coverage=1
00:07:15.720  		--rc genhtml_legend=1
00:07:15.720  		--rc geninfo_all_blocks=1
00:07:15.720  		--rc geninfo_unexecuted_blocks=1
00:07:15.720  		
00:07:15.720  		'
00:07:15.720    11:28:41 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:07:15.720  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:15.720  		--rc genhtml_branch_coverage=1
00:07:15.720  		--rc genhtml_function_coverage=1
00:07:15.720  		--rc genhtml_legend=1
00:07:15.720  		--rc geninfo_all_blocks=1
00:07:15.720  		--rc geninfo_unexecuted_blocks=1
00:07:15.720  		
00:07:15.720  		'
00:07:15.720    11:28:41 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:07:15.720  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:15.720  		--rc genhtml_branch_coverage=1
00:07:15.720  		--rc genhtml_function_coverage=1
00:07:15.720  		--rc genhtml_legend=1
00:07:15.720  		--rc geninfo_all_blocks=1
00:07:15.720  		--rc geninfo_unexecuted_blocks=1
00:07:15.720  		
00:07:15.720  		'
00:07:15.720    11:28:41 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 
00:07:15.720  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:15.720  		--rc genhtml_branch_coverage=1
00:07:15.720  		--rc genhtml_function_coverage=1
00:07:15.720  		--rc genhtml_legend=1
00:07:15.720  		--rc geninfo_all_blocks=1
00:07:15.720  		--rc geninfo_unexecuted_blocks=1
00:07:15.720  		
00:07:15.720  		'
00:07:15.720   11:28:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:07:15.720   11:28:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70214
00:07:15.720   11:28:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:07:15.721  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:15.721   11:28:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70214
00:07:15.721   11:28:41 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 70214 ']'
00:07:15.721   11:28:41 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:15.721   11:28:41 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:15.721   11:28:41 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:15.721   11:28:41 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:15.721   11:28:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:07:15.721  [2024-12-16 11:28:41.706863] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:15.721  [2024-12-16 11:28:41.707083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70214 ]
00:07:15.979  [2024-12-16 11:28:41.870641] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:15.979  [2024-12-16 11:28:41.920409] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:16.544   11:28:42 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:16.544   11:28:42 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0
00:07:16.544   11:28:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:07:16.544   11:28:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:07:16.544   11:28:42 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:16.544   11:28:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:07:16.544  {
00:07:16.544  "filename": "/tmp/spdk_mem_dump.txt"
00:07:16.544  }
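The single-field object above is the reply to env_dpdk_get_mem_stats: the target writes its DPDK memory dump to the named file, and dpdk_mem_info.py then parses that dump. A comparable manual sequence, sketched assuming a running spdk_tgt on the default UNIX socket:
    # sketch only, not part of the captured log
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats     # target writes /tmp/spdk_mem_dump.txt
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py                  # summarize heaps, mempools and memzones from the dump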
00:07:16.544   11:28:42 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:16.544   11:28:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:07:16.804  DPDK memory size 860.000000 MiB in 1 heap(s)
00:07:16.804  1 heaps totaling size 860.000000 MiB
00:07:16.804    size:  860.000000 MiB heap id: 0
00:07:16.804  end heaps----------
00:07:16.804  9 mempools totaling size 642.649841 MiB
00:07:16.804    size:  212.674988 MiB name: PDU_immediate_data_Pool
00:07:16.804    size:  158.602051 MiB name: PDU_data_out_Pool
00:07:16.804    size:   92.545471 MiB name: bdev_io_70214
00:07:16.804    size:   51.011292 MiB name: evtpool_70214
00:07:16.804    size:   50.003479 MiB name: msgpool_70214
00:07:16.804    size:   36.509338 MiB name: fsdev_io_70214
00:07:16.804    size:   21.763794 MiB name: PDU_Pool
00:07:16.804    size:   19.513306 MiB name: SCSI_TASK_Pool
00:07:16.804    size:    0.026123 MiB name: Session_Pool
00:07:16.804  end mempools-------
00:07:16.804  6 memzones totaling size 4.142822 MiB
00:07:16.804    size:    1.000366 MiB name: RG_ring_0_70214
00:07:16.804    size:    1.000366 MiB name: RG_ring_1_70214
00:07:16.804    size:    1.000366 MiB name: RG_ring_4_70214
00:07:16.804    size:    1.000366 MiB name: RG_ring_5_70214
00:07:16.804    size:    0.125366 MiB name: RG_ring_2_70214
00:07:16.804    size:    0.015991 MiB name: RG_ring_3_70214
00:07:16.804  end memzones-------
00:07:16.804   11:28:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:07:16.804  heap id: 0 total size: 860.000000 MiB number of busy elements: 298 number of free elements: 16
00:07:16.804    list of free elements. size: 13.938171 MiB
00:07:16.804      element at address: 0x200000400000 with size:    1.999512 MiB
00:07:16.804      element at address: 0x200000800000 with size:    1.996948 MiB
00:07:16.804      element at address: 0x20001bc00000 with size:    0.999878 MiB
00:07:16.804      element at address: 0x20001be00000 with size:    0.999878 MiB
00:07:16.804      element at address: 0x200034a00000 with size:    0.994446 MiB
00:07:16.804      element at address: 0x200009600000 with size:    0.959839 MiB
00:07:16.804      element at address: 0x200015e00000 with size:    0.954285 MiB
00:07:16.804      element at address: 0x20001c000000 with size:    0.936584 MiB
00:07:16.804      element at address: 0x200000200000 with size:    0.834839 MiB
00:07:16.804      element at address: 0x20001d800000 with size:    0.568237 MiB
00:07:16.804      element at address: 0x200003e00000 with size:    0.489563 MiB
00:07:16.804      element at address: 0x20000d800000 with size:    0.489258 MiB
00:07:16.804      element at address: 0x20001c200000 with size:    0.485657 MiB
00:07:16.804      element at address: 0x200007000000 with size:    0.480469 MiB
00:07:16.804      element at address: 0x20002ac00000 with size:    0.395752 MiB
00:07:16.804      element at address: 0x200003a00000 with size:    0.353027 MiB
00:07:16.804    list of standard malloc elements. size: 199.265137 MiB
00:07:16.804      element at address: 0x20000d9fff80 with size:  132.000122 MiB
00:07:16.804      element at address: 0x2000097fff80 with size:   64.000122 MiB
00:07:16.804      element at address: 0x20001bcfff80 with size:    1.000122 MiB
00:07:16.804      element at address: 0x20001befff80 with size:    1.000122 MiB
00:07:16.804      element at address: 0x20001c0fff80 with size:    1.000122 MiB
00:07:16.804      element at address: 0x2000003d9f00 with size:    0.140747 MiB
00:07:16.804      element at address: 0x20001c0eff00 with size:    0.062622 MiB
00:07:16.804      element at address: 0x2000003fdf80 with size:    0.007935 MiB
00:07:16.804      element at address: 0x20001c0efdc0 with size:    0.000305 MiB
00:07:16.804      element at address: 0x2000002d5b80 with size:    0.000183 MiB
00:07:16.804      element at address: 0x2000002d5c40 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d5d00 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d5dc0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d5e80 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d5f40 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d6000 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d60c0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d6180 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d6240 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d6300 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d63c0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d6480 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d6540 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d6600 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d66c0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d68c0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d6980 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d6a40 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d6b00 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d6bc0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d6c80 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d6d40 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d6e00 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d6ec0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d6f80 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d7040 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d7100 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d71c0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d7280 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d7340 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d7400 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d74c0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d7580 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d7640 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d7700 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d77c0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d7880 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d7940 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d7a00 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d7ac0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d7b80 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000002d7c40 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000003d9e40 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003a5a600 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003a5a800 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003a5eac0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003a7ed80 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003a7ee40 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003a7ef00 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003a7efc0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003a7f080 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003a7f140 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003a7f200 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003a7f2c0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003a7f380 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003a7f440 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003a7f500 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003a7f5c0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003aff880 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003affa80 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003affb40 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7d540 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7d600 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7d6c0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7d780 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7d840 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7d900 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7d9c0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7da80 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7db40 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7dc00 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7dcc0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7dd80 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7de40 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7df00 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7dfc0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7e080 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7e140 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7e200 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7e2c0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7e380 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7e440 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7e500 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7e5c0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7e680 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7e740 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7e800 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7e8c0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7e980 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7ea40 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7eb00 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7ebc0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7ec80 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7ed40 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003e7ee00 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200003eff0c0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20000707b000 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20000707b0c0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20000707b180 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20000707b240 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20000707b300 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20000707b3c0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20000707b480 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20000707b540 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20000707b600 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20000707b6c0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000070fb980 with size:    0.000183 MiB
00:07:16.805      element at address: 0x2000096fdd80 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20000d87d400 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20000d87d4c0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20000d87d580 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20000d87d640 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20000d87d700 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20000d87d7c0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20000d87d880 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20000d87d940 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20000d87da00 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20000d87dac0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20000d8fdd80 with size:    0.000183 MiB
00:07:16.805      element at address: 0x200015ef44c0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001c0efc40 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001c0efd00 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001c2bc740 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d891780 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d891840 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d891900 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d8919c0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d891a80 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d891b40 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d891c00 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d891cc0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d891d80 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d891e40 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d891f00 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d891fc0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d892080 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d892140 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d892200 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d8922c0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d892380 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d892440 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d892500 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d8925c0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d892680 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d892740 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d892800 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d8928c0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d892980 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d892a40 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d892b00 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d892bc0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d892c80 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d892d40 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d892e00 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d892ec0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d892f80 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d893040 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d893100 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d8931c0 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d893280 with size:    0.000183 MiB
00:07:16.805      element at address: 0x20001d893340 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d893400 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d8934c0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d893580 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d893640 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d893700 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d8937c0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d893880 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d893940 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d893a00 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d893ac0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d893b80 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d893c40 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d893d00 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d893dc0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d893e80 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d893f40 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d894000 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d8940c0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d894180 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d894240 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d894300 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d8943c0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d894480 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d894540 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d894600 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d8946c0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d894780 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d894840 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d894900 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d8949c0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d894a80 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d894b40 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d894c00 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d894cc0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d894d80 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d894e40 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d894f00 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d894fc0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d895080 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d895140 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d895200 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d8952c0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d895380 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20001d895440 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac65500 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac655c0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6c1c0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6c3c0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6c480 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6c540 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6c600 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6c6c0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6c780 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6c840 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6c900 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6c9c0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6ca80 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6cb40 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6cc00 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6ccc0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6cd80 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6ce40 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6cf00 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6cfc0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6d080 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6d140 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6d200 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6d2c0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6d380 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6d440 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6d500 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6d5c0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6d680 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6d740 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6d800 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6d8c0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6d980 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6da40 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6db00 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6dbc0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6dc80 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6dd40 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6de00 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6dec0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6df80 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6e040 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6e100 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6e1c0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6e280 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6e340 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6e400 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6e4c0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6e580 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6e640 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6e700 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6e7c0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6e880 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6e940 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6ea00 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6eac0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6eb80 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6ec40 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6ed00 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6edc0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6ee80 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6ef40 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6f000 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6f0c0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6f180 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6f240 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6f300 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6f3c0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6f480 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6f540 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6f600 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6f6c0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6f780 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6f840 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6f900 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6f9c0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6fa80 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6fb40 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6fc00 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6fcc0 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6fd80 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6fe40 with size:    0.000183 MiB
00:07:16.806      element at address: 0x20002ac6ff00 with size:    0.000183 MiB
00:07:16.806    list of memzone associated elements. size: 646.796692 MiB
00:07:16.806      element at address: 0x20001d895500 with size:  211.416748 MiB
00:07:16.806        associated memzone info: size:  211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:07:16.806      element at address: 0x20002ac6ffc0 with size:  157.562561 MiB
00:07:16.806        associated memzone info: size:  157.562439 MiB name: MP_PDU_data_out_Pool_0
00:07:16.806      element at address: 0x200015ff4780 with size:   92.045044 MiB
00:07:16.806        associated memzone info: size:   92.044922 MiB name: MP_bdev_io_70214_0
00:07:16.806      element at address: 0x2000009ff380 with size:   48.003052 MiB
00:07:16.806        associated memzone info: size:   48.002930 MiB name: MP_evtpool_70214_0
00:07:16.806      element at address: 0x200003fff380 with size:   48.003052 MiB
00:07:16.806        associated memzone info: size:   48.002930 MiB name: MP_msgpool_70214_0
00:07:16.806      element at address: 0x2000071fdb80 with size:   36.008911 MiB
00:07:16.806        associated memzone info: size:   36.008789 MiB name: MP_fsdev_io_70214_0
00:07:16.806      element at address: 0x20001c3be940 with size:   20.255554 MiB
00:07:16.806        associated memzone info: size:   20.255432 MiB name: MP_PDU_Pool_0
00:07:16.806      element at address: 0x200034bfeb40 with size:   18.005066 MiB
00:07:16.806        associated memzone info: size:   18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:07:16.806      element at address: 0x2000005ffe00 with size:    2.000488 MiB
00:07:16.806        associated memzone info: size:    2.000366 MiB name: RG_MP_evtpool_70214
00:07:16.806      element at address: 0x200003bffe00 with size:    2.000488 MiB
00:07:16.806        associated memzone info: size:    2.000366 MiB name: RG_MP_msgpool_70214
00:07:16.806      element at address: 0x2000002d7d00 with size:    1.008118 MiB
00:07:16.806        associated memzone info: size:    1.007996 MiB name: MP_evtpool_70214
00:07:16.806      element at address: 0x20000d8fde40 with size:    1.008118 MiB
00:07:16.806        associated memzone info: size:    1.007996 MiB name: MP_PDU_Pool
00:07:16.806      element at address: 0x20001c2bc800 with size:    1.008118 MiB
00:07:16.806        associated memzone info: size:    1.007996 MiB name: MP_PDU_immediate_data_Pool
00:07:16.806      element at address: 0x2000096fde40 with size:    1.008118 MiB
00:07:16.806        associated memzone info: size:    1.007996 MiB name: MP_PDU_data_out_Pool
00:07:16.806      element at address: 0x2000070fba40 with size:    1.008118 MiB
00:07:16.807        associated memzone info: size:    1.007996 MiB name: MP_SCSI_TASK_Pool
00:07:16.807      element at address: 0x200003eff180 with size:    1.000488 MiB
00:07:16.807        associated memzone info: size:    1.000366 MiB name: RG_ring_0_70214
00:07:16.807      element at address: 0x200003affc00 with size:    1.000488 MiB
00:07:16.807        associated memzone info: size:    1.000366 MiB name: RG_ring_1_70214
00:07:16.807      element at address: 0x200015ef4580 with size:    1.000488 MiB
00:07:16.807        associated memzone info: size:    1.000366 MiB name: RG_ring_4_70214
00:07:16.807      element at address: 0x200034afe940 with size:    1.000488 MiB
00:07:16.807        associated memzone info: size:    1.000366 MiB name: RG_ring_5_70214
00:07:16.807      element at address: 0x200003a7f680 with size:    0.500488 MiB
00:07:16.807        associated memzone info: size:    0.500366 MiB name: RG_MP_fsdev_io_70214
00:07:16.807      element at address: 0x200003e7eec0 with size:    0.500488 MiB
00:07:16.807        associated memzone info: size:    0.500366 MiB name: RG_MP_bdev_io_70214
00:07:16.807      element at address: 0x20000d87db80 with size:    0.500488 MiB
00:07:16.807        associated memzone info: size:    0.500366 MiB name: RG_MP_PDU_Pool
00:07:16.807      element at address: 0x20000707b780 with size:    0.500488 MiB
00:07:16.807        associated memzone info: size:    0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:07:16.807      element at address: 0x20001c27c540 with size:    0.250488 MiB
00:07:16.807        associated memzone info: size:    0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:07:16.807      element at address: 0x200003a5eb80 with size:    0.125488 MiB
00:07:16.807        associated memzone info: size:    0.125366 MiB name: RG_ring_2_70214
00:07:16.807      element at address: 0x2000096f5b80 with size:    0.031738 MiB
00:07:16.807        associated memzone info: size:    0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:07:16.807      element at address: 0x20002ac65680 with size:    0.023743 MiB
00:07:16.807        associated memzone info: size:    0.023621 MiB name: MP_Session_Pool_0
00:07:16.807      element at address: 0x200003a5a8c0 with size:    0.016113 MiB
00:07:16.807        associated memzone info: size:    0.015991 MiB name: RG_ring_3_70214
00:07:16.807      element at address: 0x20002ac6b7c0 with size:    0.002441 MiB
00:07:16.807        associated memzone info: size:    0.002319 MiB name: RG_MP_Session_Pool
00:07:16.807      element at address: 0x2000002d6780 with size:    0.000305 MiB
00:07:16.807        associated memzone info: size:    0.000183 MiB name: MP_msgpool_70214
00:07:16.807      element at address: 0x200003aff940 with size:    0.000305 MiB
00:07:16.807        associated memzone info: size:    0.000183 MiB name: MP_fsdev_io_70214
00:07:16.807      element at address: 0x200003a5a6c0 with size:    0.000305 MiB
00:07:16.807        associated memzone info: size:    0.000183 MiB name: MP_bdev_io_70214
00:07:16.807      element at address: 0x20002ac6c280 with size:    0.000305 MiB
00:07:16.807        associated memzone info: size:    0.000183 MiB name: MP_Session_Pool
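As a quick consistency check on the heap-0 listing above: the three section totals reported by dpdk_mem_info.py -m 0 add up to the full heap size, 13.938171 MiB (free elements) + 199.265137 MiB (standard malloc elements) + 646.796692 MiB (memzone associated elements) = 860.000000 MiB, matching the "heap id: 0 total size: 860.000000 MiB" heading.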
00:07:16.807   11:28:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:07:16.807   11:28:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70214
00:07:16.807   11:28:42 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 70214 ']'
00:07:16.807   11:28:42 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 70214
00:07:16.807    11:28:42 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:07:16.807   11:28:42 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:16.807    11:28:42 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70214
00:07:16.807   11:28:42 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:16.807   11:28:42 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:16.807   11:28:42 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70214'
00:07:16.807  killing process with pid 70214
00:07:16.807   11:28:42 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 70214
00:07:16.807   11:28:42 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 70214
00:07:17.375  
00:07:17.375  real	0m1.760s
00:07:17.375  user	0m1.718s
00:07:17.375  sys	0m0.530s
00:07:17.375   11:28:43 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:17.375   11:28:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:07:17.375  ************************************
00:07:17.375  END TEST dpdk_mem_utility
00:07:17.375  ************************************
00:07:17.375   11:28:43  -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:07:17.375   11:28:43  -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:17.375   11:28:43  -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:17.375   11:28:43  -- common/autotest_common.sh@10 -- # set +x
00:07:17.375  ************************************
00:07:17.375  START TEST event
00:07:17.375  ************************************
00:07:17.375   11:28:43 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:07:17.375  * Looking for test storage...
00:07:17.375  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:07:17.375    11:28:43 event -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:07:17.375     11:28:43 event -- common/autotest_common.sh@1681 -- # lcov --version
00:07:17.375     11:28:43 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:07:17.375    11:28:43 event -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:07:17.375    11:28:43 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:17.375    11:28:43 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:17.375    11:28:43 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:17.375    11:28:43 event -- scripts/common.sh@336 -- # IFS=.-:
00:07:17.375    11:28:43 event -- scripts/common.sh@336 -- # read -ra ver1
00:07:17.375    11:28:43 event -- scripts/common.sh@337 -- # IFS=.-:
00:07:17.375    11:28:43 event -- scripts/common.sh@337 -- # read -ra ver2
00:07:17.375    11:28:43 event -- scripts/common.sh@338 -- # local 'op=<'
00:07:17.375    11:28:43 event -- scripts/common.sh@340 -- # ver1_l=2
00:07:17.375    11:28:43 event -- scripts/common.sh@341 -- # ver2_l=1
00:07:17.376    11:28:43 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:17.376    11:28:43 event -- scripts/common.sh@344 -- # case "$op" in
00:07:17.376    11:28:43 event -- scripts/common.sh@345 -- # : 1
00:07:17.376    11:28:43 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:17.376    11:28:43 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:17.376     11:28:43 event -- scripts/common.sh@365 -- # decimal 1
00:07:17.376     11:28:43 event -- scripts/common.sh@353 -- # local d=1
00:07:17.376     11:28:43 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:17.376     11:28:43 event -- scripts/common.sh@355 -- # echo 1
00:07:17.376    11:28:43 event -- scripts/common.sh@365 -- # ver1[v]=1
00:07:17.376     11:28:43 event -- scripts/common.sh@366 -- # decimal 2
00:07:17.376     11:28:43 event -- scripts/common.sh@353 -- # local d=2
00:07:17.376     11:28:43 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:17.376     11:28:43 event -- scripts/common.sh@355 -- # echo 2
00:07:17.376    11:28:43 event -- scripts/common.sh@366 -- # ver2[v]=2
00:07:17.376    11:28:43 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:17.376    11:28:43 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:17.376    11:28:43 event -- scripts/common.sh@368 -- # return 0
00:07:17.376    11:28:43 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:17.376    11:28:43 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:07:17.376  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:17.376  		--rc genhtml_branch_coverage=1
00:07:17.376  		--rc genhtml_function_coverage=1
00:07:17.376  		--rc genhtml_legend=1
00:07:17.376  		--rc geninfo_all_blocks=1
00:07:17.376  		--rc geninfo_unexecuted_blocks=1
00:07:17.376  		
00:07:17.376  		'
00:07:17.376    11:28:43 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:07:17.376  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:17.376  		--rc genhtml_branch_coverage=1
00:07:17.376  		--rc genhtml_function_coverage=1
00:07:17.376  		--rc genhtml_legend=1
00:07:17.376  		--rc geninfo_all_blocks=1
00:07:17.376  		--rc geninfo_unexecuted_blocks=1
00:07:17.376  		
00:07:17.376  		'
00:07:17.376    11:28:43 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:07:17.376  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:17.376  		--rc genhtml_branch_coverage=1
00:07:17.376  		--rc genhtml_function_coverage=1
00:07:17.376  		--rc genhtml_legend=1
00:07:17.376  		--rc geninfo_all_blocks=1
00:07:17.376  		--rc geninfo_unexecuted_blocks=1
00:07:17.376  		
00:07:17.376  		'
00:07:17.376    11:28:43 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 
00:07:17.376  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:17.376  		--rc genhtml_branch_coverage=1
00:07:17.376  		--rc genhtml_function_coverage=1
00:07:17.376  		--rc genhtml_legend=1
00:07:17.376  		--rc geninfo_all_blocks=1
00:07:17.376  		--rc geninfo_unexecuted_blocks=1
00:07:17.376  		
00:07:17.376  		'
00:07:17.376   11:28:43 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:07:17.376    11:28:43 event -- bdev/nbd_common.sh@6 -- # set -e
00:07:17.635   11:28:43 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:17.636   11:28:43 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:07:17.636   11:28:43 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:17.636   11:28:43 event -- common/autotest_common.sh@10 -- # set +x
00:07:17.636  ************************************
00:07:17.636  START TEST event_perf
00:07:17.636  ************************************
00:07:17.636   11:28:43 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:17.636  Running I/O for 1 seconds...[2024-12-16 11:28:43.493688] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:17.636  [2024-12-16 11:28:43.493835] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70300 ]
00:07:17.636  [2024-12-16 11:28:43.655747] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:17.894  [2024-12-16 11:28:43.712920] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:07:17.894  [2024-12-16 11:28:43.713117] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:07:17.894  [2024-12-16 11:28:43.713201] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:17.894  Running I/O for 1 seconds...[2024-12-16 11:28:43.713328] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:07:18.832  
00:07:18.832  lcore  0:   183738
00:07:18.832  lcore  1:   183737
00:07:18.832  lcore  2:   183737
00:07:18.832  lcore  3:   183738
00:07:18.832  done.
00:07:18.832  
00:07:18.832  real	0m1.369s
00:07:18.832  user	0m4.109s
00:07:18.832  sys	0m0.125s
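For scale, the per-lcore counters printed by event_perf above sum to 183738 + 183737 + 183737 + 183738 = 734950 events over the one-second run across the 0xF core mask, i.e. roughly 184 k events per lcore per second on this host; the exact figures naturally vary from run to run.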
00:07:18.832   11:28:44 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:18.832   11:28:44 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:07:18.832  ************************************
00:07:18.832  END TEST event_perf
00:07:18.832  ************************************
00:07:18.832   11:28:44 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:07:18.832   11:28:44 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:07:18.832   11:28:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:18.832   11:28:44 event -- common/autotest_common.sh@10 -- # set +x
00:07:18.832  ************************************
00:07:18.832  START TEST event_reactor
00:07:18.832  ************************************
00:07:18.832   11:28:44 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:07:19.092  [2024-12-16 11:28:44.929971] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:19.092  [2024-12-16 11:28:44.930199] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70334 ]
00:07:19.092  [2024-12-16 11:28:45.079797] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:19.092  [2024-12-16 11:28:45.135975] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:20.470  test_start
00:07:20.471  oneshot
00:07:20.471  tick 100
00:07:20.471  tick 100
00:07:20.471  tick 250
00:07:20.471  tick 100
00:07:20.471  tick 100
00:07:20.471  tick 100
00:07:20.471  tick 250
00:07:20.471  tick 500
00:07:20.471  tick 100
00:07:20.471  tick 100
00:07:20.471  tick 250
00:07:20.471  tick 100
00:07:20.471  tick 100
00:07:20.471  test_end
00:07:20.471  
00:07:20.471  real	0m1.348s
00:07:20.471  user	0m1.138s
00:07:20.471  sys	0m0.101s
00:07:20.471  ************************************
00:07:20.471  END TEST event_reactor
00:07:20.471  ************************************
00:07:20.471   11:28:46 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:20.471   11:28:46 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:07:20.471   11:28:46 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:07:20.471   11:28:46 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:07:20.471   11:28:46 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:20.471   11:28:46 event -- common/autotest_common.sh@10 -- # set +x
00:07:20.471  ************************************
00:07:20.471  START TEST event_reactor_perf
00:07:20.471  ************************************
00:07:20.471   11:28:46 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:07:20.471  [2024-12-16 11:28:46.336017] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:20.471  [2024-12-16 11:28:46.336226] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70365 ]
00:07:20.471  [2024-12-16 11:28:46.498153] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:20.734  [2024-12-16 11:28:46.549924] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:21.677  test_start
00:07:21.677  test_end
00:07:21.677  Performance:   347115 events per second
00:07:21.677  
00:07:21.677  real	0m1.357s
00:07:21.677  user	0m1.150s
00:07:21.677  sys	0m0.098s
00:07:21.677   11:28:47 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:21.677  ************************************
00:07:21.677  END TEST event_reactor_perf
00:07:21.677  ************************************
00:07:21.677   11:28:47 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
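The reactor_perf run above reports its result as a single throughput line. Below is a small sketch, not part of the test scripts, that reruns the binary and extracts that figure, assuming the output format shown above ("Performance: N events per second") is stable.

    # Sketch: rerun reactor_perf and pull out the events-per-second figure.
    SPDK=/home/vagrant/spdk_repo/spdk
    eps=$($SPDK/test/event/reactor_perf/reactor_perf -t 1 | awk '/Performance:/ {print $2}')
    echo "reactor events/sec: $eps"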
00:07:21.677    11:28:47 event -- event/event.sh@49 -- # uname -s
00:07:21.677   11:28:47 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:07:21.677   11:28:47 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:07:21.677   11:28:47 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:21.677   11:28:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:21.677   11:28:47 event -- common/autotest_common.sh@10 -- # set +x
00:07:21.677  ************************************
00:07:21.677  START TEST event_scheduler
00:07:21.677  ************************************
00:07:21.677   11:28:47 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:07:21.937  * Looking for test storage...
00:07:21.937  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:07:21.937    11:28:47 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:07:21.937     11:28:47 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version
00:07:21.937     11:28:47 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:07:21.937    11:28:47 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:07:21.937    11:28:47 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:21.937    11:28:47 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:21.937    11:28:47 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:21.937    11:28:47 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:07:21.937    11:28:47 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:07:21.937    11:28:47 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:07:21.937    11:28:47 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:07:21.937    11:28:47 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:07:21.937    11:28:47 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:07:21.937    11:28:47 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:07:21.937    11:28:47 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:21.937    11:28:47 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:07:21.937    11:28:47 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:07:21.937    11:28:47 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:21.937    11:28:47 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:21.937     11:28:47 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:07:21.937     11:28:47 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:07:21.937     11:28:47 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:21.937     11:28:47 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:07:21.937    11:28:47 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:07:21.937     11:28:47 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:07:21.937     11:28:47 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:07:21.937     11:28:47 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:21.937     11:28:47 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:07:21.937    11:28:47 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:07:21.937    11:28:47 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:21.937    11:28:47 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:21.937    11:28:47 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:07:21.937    11:28:47 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:21.937    11:28:47 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:07:21.937  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:21.937  		--rc genhtml_branch_coverage=1
00:07:21.937  		--rc genhtml_function_coverage=1
00:07:21.937  		--rc genhtml_legend=1
00:07:21.937  		--rc geninfo_all_blocks=1
00:07:21.937  		--rc geninfo_unexecuted_blocks=1
00:07:21.937  		
00:07:21.937  		'
00:07:21.937    11:28:47 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:07:21.937  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:21.937  		--rc genhtml_branch_coverage=1
00:07:21.937  		--rc genhtml_function_coverage=1
00:07:21.937  		--rc genhtml_legend=1
00:07:21.937  		--rc geninfo_all_blocks=1
00:07:21.937  		--rc geninfo_unexecuted_blocks=1
00:07:21.937  		
00:07:21.937  		'
00:07:21.937    11:28:47 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:07:21.937  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:21.937  		--rc genhtml_branch_coverage=1
00:07:21.937  		--rc genhtml_function_coverage=1
00:07:21.937  		--rc genhtml_legend=1
00:07:21.937  		--rc geninfo_all_blocks=1
00:07:21.937  		--rc geninfo_unexecuted_blocks=1
00:07:21.937  		
00:07:21.937  		'
00:07:21.937    11:28:47 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 
00:07:21.937  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:21.937  		--rc genhtml_branch_coverage=1
00:07:21.937  		--rc genhtml_function_coverage=1
00:07:21.937  		--rc genhtml_legend=1
00:07:21.937  		--rc geninfo_all_blocks=1
00:07:21.937  		--rc geninfo_unexecuted_blocks=1
00:07:21.937  		
00:07:21.937  		'
00:07:21.937   11:28:47 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:07:21.937   11:28:47 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70441
00:07:21.937   11:28:47 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:07:21.937   11:28:47 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:07:21.937   11:28:47 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70441
00:07:21.937   11:28:47 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70441 ']'
00:07:21.937   11:28:47 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:21.937   11:28:47 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:21.937   11:28:47 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:21.937  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:21.937   11:28:47 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:21.937   11:28:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:22.196  [2024-12-16 11:28:48.022927] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:22.197  [2024-12-16 11:28:48.023129] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70441 ]
00:07:22.197  [2024-12-16 11:28:48.186034] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:22.197  [2024-12-16 11:28:48.239739] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:22.197  [2024-12-16 11:28:48.239959] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:07:22.197  [2024-12-16 11:28:48.239970] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:07:22.197  [2024-12-16 11:28:48.240102] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:07:23.134   11:28:48 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:23.135   11:28:48 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0
00:07:23.135   11:28:48 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:07:23.135   11:28:48 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:23.135   11:28:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:23.135  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:07:23.135  POWER: Cannot set governor of lcore 0 to userspace
00:07:23.135  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:07:23.135  POWER: Cannot set governor of lcore 0 to performance
00:07:23.135  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:07:23.135  POWER: Cannot set governor of lcore 0 to userspace
00:07:23.135  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:07:23.135  POWER: Cannot set governor of lcore 0 to userspace
00:07:23.135  GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:07:23.135  POWER: Unable to set Power Management Environment for lcore 0
00:07:23.135  [2024-12-16 11:28:48.876693] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0
00:07:23.135  [2024-12-16 11:28:48.876719] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0
00:07:23.135  [2024-12-16 11:28:48.876735] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:07:23.135  [2024-12-16 11:28:48.876777] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:07:23.135  [2024-12-16 11:28:48.876787] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:07:23.135  [2024-12-16 11:28:48.876798] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
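The POWER errors above show that the DPDK governor could not open the per-CPU scaling_governor files, so the dynamic scheduler continues without CPU frequency control. A quick way to check whether the host exposes cpufreq control at all is sketched below; scaling_governor is the path named in the errors, while scaling_available_governors is assumed to be its usual sysfs neighbor (it does not appear in this log).

    # Sketch: check cpufreq support on cpu0 (paths partly assumed, see above).
    for f in /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor \
             /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors; do
        if [ -r "$f" ]; then echo "$f: $(cat "$f")"; else echo "$f: not present"; fi
    done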
00:07:23.135   11:28:48 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:23.135   11:28:48 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:07:23.135   11:28:48 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:23.135   11:28:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:23.135  [2024-12-16 11:28:48.948899] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:07:23.135   11:28:48 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:23.135   11:28:48 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:07:23.135   11:28:48 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:23.135   11:28:48 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:23.135   11:28:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:23.135  ************************************
00:07:23.135  START TEST scheduler_create_thread
00:07:23.135  ************************************
00:07:23.135   11:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread
00:07:23.135   11:28:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:07:23.135   11:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:23.135   11:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:23.135  2
00:07:23.135   11:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:23.135   11:28:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:07:23.135   11:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:23.135   11:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:23.135  3
00:07:23.135   11:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:23.135   11:28:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:07:23.135   11:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:23.135   11:28:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:23.135  4
00:07:23.135   11:28:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:23.135   11:28:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:07:23.135   11:28:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:23.135   11:28:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:23.135  5
00:07:23.135   11:28:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:23.135   11:28:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:07:23.135   11:28:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:23.135   11:28:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:23.135  6
00:07:23.135   11:28:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:23.135   11:28:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:07:23.135   11:28:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:23.135   11:28:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:23.135  7
00:07:23.135   11:28:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:23.135   11:28:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:07:23.135   11:28:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:23.135   11:28:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:23.135  8
00:07:23.135   11:28:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:23.135   11:28:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:07:23.135   11:28:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:23.135   11:28:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:23.135  9
00:07:23.135   11:28:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:23.135   11:28:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:07:23.135   11:28:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:23.135   11:28:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:23.135  10
00:07:23.135   11:28:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:23.135    11:28:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:07:23.135    11:28:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:23.135    11:28:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:24.514    11:28:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:24.514   11:28:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:07:24.514   11:28:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:07:24.514   11:28:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.514   11:28:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:25.453   11:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:25.453    11:28:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:07:25.453    11:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:25.453    11:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:26.022    11:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:26.282   11:28:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:07:26.282   11:28:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:07:26.282   11:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:26.282   11:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:26.851  ************************************
00:07:26.851  END TEST scheduler_create_thread
00:07:26.851  ************************************
00:07:26.851   11:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:26.851  
00:07:26.851  real	0m3.883s
00:07:26.851  user	0m0.025s
00:07:26.851  sys	0m0.013s
00:07:26.851   11:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:26.851   11:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:26.851   11:28:52 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:07:26.851   11:28:52 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70441
00:07:26.851   11:28:52 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70441 ']'
00:07:26.851   11:28:52 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70441
00:07:26.851    11:28:52 event.event_scheduler -- common/autotest_common.sh@955 -- # uname
00:07:26.851   11:28:52 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:26.851    11:28:52 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70441
00:07:27.111  killing process with pid 70441
00:07:27.111   11:28:52 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:07:27.111   11:28:52 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:07:27.111   11:28:52 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70441'
00:07:27.111   11:28:52 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70441
00:07:27.111   11:28:52 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70441
00:07:27.371  [2024-12-16 11:28:53.224426] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:07:27.631  ************************************
00:07:27.631  END TEST event_scheduler
00:07:27.631  ************************************
00:07:27.631  
00:07:27.631  real	0m5.832s
00:07:27.631  user	0m12.039s
00:07:27.631  sys	0m0.492s
00:07:27.631   11:28:53 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:27.631   11:28:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
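The scheduler test above drives the application entirely over RPC. A condensed sketch of the same sequence issued by hand with scripts/rpc.py follows; every RPC name appears in the trace, while the thread name, mask, and active values are illustrative. It assumes the scheduler test app is already running with --wait-for-rpc and that the scheduler_plugin module is importable by rpc.py, as scheduler.sh arranges.

    # Sketch of the RPC sequence exercised by scheduler.sh.
    SPDK=/home/vagrant/spdk_repo/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC framework_set_scheduler dynamic      # select the dynamic scheduler
    $RPC framework_start_init                 # finish init (app was started with --wait-for-rpc)
    # Plugin RPCs provided by the scheduler test application:
    tid=$($RPC --plugin scheduler_plugin scheduler_thread_create -n demo_thread -m 0x1 -a 100)
    $RPC --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    $RPC --plugin scheduler_plugin scheduler_thread_delete "$tid"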
00:07:27.631   11:28:53 event -- event/event.sh@51 -- # modprobe -n nbd
00:07:27.631   11:28:53 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:07:27.631   11:28:53 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:27.631   11:28:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:27.631   11:28:53 event -- common/autotest_common.sh@10 -- # set +x
00:07:27.631  ************************************
00:07:27.631  START TEST app_repeat
00:07:27.631  ************************************
00:07:27.631   11:28:53 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test
00:07:27.631   11:28:53 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:27.631   11:28:53 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:27.631   11:28:53 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:07:27.631   11:28:53 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:27.631   11:28:53 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:07:27.631   11:28:53 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:07:27.631   11:28:53 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:07:27.631   11:28:53 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70547
00:07:27.631   11:28:53 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:07:27.631  Process app_repeat pid: 70547
00:07:27.631  spdk_app_start Round 0
00:07:27.631  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:27.631   11:28:53 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70547'
00:07:27.631   11:28:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:07:27.631   11:28:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:07:27.631   11:28:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70547 /var/tmp/spdk-nbd.sock
00:07:27.631   11:28:53 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:07:27.631   11:28:53 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70547 ']'
00:07:27.631   11:28:53 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:27.631   11:28:53 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:27.631   11:28:53 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:27.631   11:28:53 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:27.631   11:28:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:27.631  [2024-12-16 11:28:53.678535] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:27.631  [2024-12-16 11:28:53.678688] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70547 ]
00:07:27.890  [2024-12-16 11:28:53.842599] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:27.890  [2024-12-16 11:28:53.901334] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:27.890  [2024-12-16 11:28:53.901425] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:07:28.828   11:28:54 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:28.828   11:28:54 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:07:28.828   11:28:54 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:28.828  Malloc0
00:07:28.828   11:28:54 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:29.087  Malloc1
00:07:29.087   11:28:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:29.087   11:28:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:29.087   11:28:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:29.087   11:28:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:29.087   11:28:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:29.087   11:28:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:29.087   11:28:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:29.087   11:28:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:29.087   11:28:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:29.087   11:28:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:29.087   11:28:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:29.087   11:28:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:29.087   11:28:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:07:29.087   11:28:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:29.087   11:28:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:29.087   11:28:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:07:29.346  /dev/nbd0
00:07:29.346    11:28:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:29.346   11:28:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:29.346   11:28:55 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:07:29.346   11:28:55 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:07:29.346   11:28:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:29.346   11:28:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:29.346   11:28:55 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:07:29.346   11:28:55 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:07:29.346   11:28:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:29.346   11:28:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:29.346   11:28:55 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:29.346  1+0 records in
00:07:29.346  1+0 records out
00:07:29.346  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282623 s, 14.5 MB/s
00:07:29.346    11:28:55 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:29.607   11:28:55 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:07:29.607   11:28:55 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:29.607   11:28:55 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:29.607   11:28:55 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:07:29.607   11:28:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:29.607   11:28:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:29.607   11:28:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:07:29.866  /dev/nbd1
00:07:29.866    11:28:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:29.866   11:28:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:29.866   11:28:55 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:07:29.866   11:28:55 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:07:29.866   11:28:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:29.866   11:28:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:29.866   11:28:55 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:07:29.866   11:28:55 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:07:29.866   11:28:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:29.866   11:28:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:29.866   11:28:55 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:29.866  1+0 records in
00:07:29.866  1+0 records out
00:07:29.866  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387005 s, 10.6 MB/s
00:07:29.866    11:28:55 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:29.866   11:28:55 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:07:29.866   11:28:55 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:29.866   11:28:55 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:29.866   11:28:55 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:07:29.866   11:28:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:29.866   11:28:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:29.866    11:28:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:29.866    11:28:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:29.866     11:28:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:30.126    11:28:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:30.126    {
00:07:30.126      "nbd_device": "/dev/nbd0",
00:07:30.126      "bdev_name": "Malloc0"
00:07:30.126    },
00:07:30.126    {
00:07:30.126      "nbd_device": "/dev/nbd1",
00:07:30.126      "bdev_name": "Malloc1"
00:07:30.126    }
00:07:30.126  ]'
00:07:30.126     11:28:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:30.126    {
00:07:30.126      "nbd_device": "/dev/nbd0",
00:07:30.126      "bdev_name": "Malloc0"
00:07:30.126    },
00:07:30.126    {
00:07:30.126      "nbd_device": "/dev/nbd1",
00:07:30.126      "bdev_name": "Malloc1"
00:07:30.126    }
00:07:30.126  ]'
00:07:30.126     11:28:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:30.126    11:28:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:30.126  /dev/nbd1'
00:07:30.126     11:28:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:30.126     11:28:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:30.126  /dev/nbd1'
00:07:30.126    11:28:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:07:30.126    11:28:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:07:30.126   11:28:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:07:30.126   11:28:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:07:30.126   11:28:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:07:30.126   11:28:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:30.126   11:28:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:30.126   11:28:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:07:30.126   11:28:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:30.126   11:28:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:30.126   11:28:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:30.126  256+0 records in
00:07:30.126  256+0 records out
00:07:30.126  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138759 s, 75.6 MB/s
00:07:30.126   11:28:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:30.126   11:28:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:30.126  256+0 records in
00:07:30.126  256+0 records out
00:07:30.126  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023952 s, 43.8 MB/s
00:07:30.126   11:28:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:30.126   11:28:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:30.126  256+0 records in
00:07:30.126  256+0 records out
00:07:30.126  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022786 s, 46.0 MB/s
00:07:30.126   11:28:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:30.126   11:28:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:30.126   11:28:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:30.126   11:28:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:30.126   11:28:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:30.126   11:28:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:30.126   11:28:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:30.126   11:28:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:30.126   11:28:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:07:30.126   11:28:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:30.126   11:28:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:07:30.126   11:28:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:30.126   11:28:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:30.126   11:28:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:30.126   11:28:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:30.126   11:28:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:30.126   11:28:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:07:30.126   11:28:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:30.126   11:28:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:30.385    11:28:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:30.385   11:28:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:30.385   11:28:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:30.385   11:28:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:30.385   11:28:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:30.385   11:28:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:30.385   11:28:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:30.385   11:28:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:30.386   11:28:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:30.386   11:28:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:30.645    11:28:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:30.645   11:28:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:30.645   11:28:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:30.645   11:28:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:30.645   11:28:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:30.645   11:28:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:30.645   11:28:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:30.645   11:28:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:30.645    11:28:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:30.645    11:28:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:30.645     11:28:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:30.905    11:28:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:30.905     11:28:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:30.905     11:28:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:30.905    11:28:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:30.905     11:28:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:30.905     11:28:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:30.905     11:28:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:07:30.905    11:28:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:07:30.905    11:28:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:30.905   11:28:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:07:30.905   11:28:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:30.905   11:28:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:07:30.905   11:28:56 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:07:31.164   11:28:57 event.app_repeat -- event/event.sh@35 -- # sleep 3
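Each app_repeat round above follows the same pattern: create two malloc bdevs, export them over NBD, write random data through the block devices, read it back for comparison, then detach and confirm no NBD disks remain. A compressed sketch of that flow, using only RPCs visible in the trace, is below; the temporary file path is illustrative, and the app_repeat process must already be listening on /var/tmp/spdk-nbd.sock as in the run above.

    # Sketch of one app_repeat data-verify round (RPC names as seen in the trace).
    SPDK=/home/vagrant/spdk_repo/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create 64 4096                 # 64 MiB bdev, 4096-byte blocks -> Malloc0
    $RPC bdev_malloc_create 64 4096                 # -> Malloc1
    $RPC nbd_start_disk Malloc0 /dev/nbd0
    $RPC nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    for d in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of="$d" bs=4096 count=256 oflag=direct
        cmp -b -n 1M /tmp/nbdrandtest "$d"          # verify the data round-trips
    done
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1
    $RPC nbd_get_disks                              # should now report an empty list
    rm -f /tmp/nbdrandtest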
00:07:31.424  [2024-12-16 11:28:57.288240] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:31.424  [2024-12-16 11:28:57.333120] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:31.424  [2024-12-16 11:28:57.333125] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:07:31.424  [2024-12-16 11:28:57.375150] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:07:31.424  [2024-12-16 11:28:57.375207] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:07:34.711  spdk_app_start Round 1
00:07:34.711  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:34.711   11:29:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:07:34.712   11:29:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:07:34.712   11:29:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70547 /var/tmp/spdk-nbd.sock
00:07:34.712   11:29:00 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70547 ']'
00:07:34.712   11:29:00 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:34.712   11:29:00 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:34.712   11:29:00 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:34.712   11:29:00 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:34.712   11:29:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:34.712   11:29:00 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:34.712   11:29:00 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:07:34.712   11:29:00 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:34.712  Malloc0
00:07:34.712   11:29:00 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:34.712  Malloc1
00:07:34.970   11:29:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:34.970   11:29:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:34.970   11:29:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:34.970   11:29:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:34.970   11:29:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:34.970   11:29:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:34.970   11:29:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:34.970   11:29:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:34.970   11:29:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:34.970   11:29:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:34.970   11:29:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:34.970   11:29:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:34.970   11:29:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:07:34.970   11:29:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:34.970   11:29:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:34.970   11:29:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:07:34.970  /dev/nbd0
00:07:35.229    11:29:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:35.229   11:29:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:35.229   11:29:01 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:07:35.229   11:29:01 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:07:35.229   11:29:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:35.229   11:29:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:35.229   11:29:01 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:07:35.229   11:29:01 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:07:35.229   11:29:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:35.229   11:29:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:35.229   11:29:01 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:35.229  1+0 records in
00:07:35.229  1+0 records out
00:07:35.229  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000619123 s, 6.6 MB/s
00:07:35.229    11:29:01 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:35.229   11:29:01 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:07:35.229   11:29:01 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:35.229   11:29:01 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:35.229   11:29:01 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:07:35.229   11:29:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:35.229   11:29:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:35.230   11:29:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:07:35.230  /dev/nbd1
00:07:35.489    11:29:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:35.489   11:29:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:35.489   11:29:01 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:07:35.489   11:29:01 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:07:35.489   11:29:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:35.489   11:29:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:35.489   11:29:01 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:07:35.489   11:29:01 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:07:35.489   11:29:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:35.489   11:29:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:35.489   11:29:01 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:35.489  1+0 records in
00:07:35.489  1+0 records out
00:07:35.489  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396045 s, 10.3 MB/s
00:07:35.489    11:29:01 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:35.489   11:29:01 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:07:35.489   11:29:01 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:35.489   11:29:01 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:35.489   11:29:01 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:07:35.489   11:29:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:35.489   11:29:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:35.489    11:29:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:35.489    11:29:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:35.489     11:29:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:35.747    11:29:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:35.747    {
00:07:35.747      "nbd_device": "/dev/nbd0",
00:07:35.747      "bdev_name": "Malloc0"
00:07:35.747    },
00:07:35.747    {
00:07:35.747      "nbd_device": "/dev/nbd1",
00:07:35.747      "bdev_name": "Malloc1"
00:07:35.747    }
00:07:35.747  ]'
00:07:35.747     11:29:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:35.747    {
00:07:35.747      "nbd_device": "/dev/nbd0",
00:07:35.747      "bdev_name": "Malloc0"
00:07:35.747    },
00:07:35.747    {
00:07:35.747      "nbd_device": "/dev/nbd1",
00:07:35.747      "bdev_name": "Malloc1"
00:07:35.747    }
00:07:35.747  ]'
00:07:35.747     11:29:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:35.747    11:29:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:35.747  /dev/nbd1'
00:07:35.747     11:29:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:35.747  /dev/nbd1'
00:07:35.747     11:29:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:35.747    11:29:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:07:35.747    11:29:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:07:35.747   11:29:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:07:35.747   11:29:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:07:35.747   11:29:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:07:35.747   11:29:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:35.747   11:29:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:35.747   11:29:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:07:35.747   11:29:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:35.747   11:29:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:35.747   11:29:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:35.747  256+0 records in
00:07:35.747  256+0 records out
00:07:35.747  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00543041 s, 193 MB/s
00:07:35.747   11:29:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:35.747   11:29:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:35.747  256+0 records in
00:07:35.747  256+0 records out
00:07:35.747  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234134 s, 44.8 MB/s
00:07:35.747   11:29:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:35.747   11:29:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:35.747  256+0 records in
00:07:35.747  256+0 records out
00:07:35.747  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0284599 s, 36.8 MB/s
00:07:35.747   11:29:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:35.747   11:29:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:35.747   11:29:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:35.747   11:29:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:35.747   11:29:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:35.747   11:29:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:35.747   11:29:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:35.747   11:29:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:35.747   11:29:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:07:35.747   11:29:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:35.747   11:29:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:07:35.747   11:29:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:35.747   11:29:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:35.747   11:29:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:35.747   11:29:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:35.747   11:29:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:35.747   11:29:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:07:35.747   11:29:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:35.747   11:29:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:36.006    11:29:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:36.006   11:29:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:36.006   11:29:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:36.006   11:29:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:36.006   11:29:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:36.006   11:29:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:36.006   11:29:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:36.006   11:29:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:36.006   11:29:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:36.006   11:29:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:36.265    11:29:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:36.265   11:29:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:36.265   11:29:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:36.265   11:29:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:36.265   11:29:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:36.265   11:29:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:36.265   11:29:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:36.265   11:29:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
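nbd_stop_disks then tears both exports down over the same RPC socket and polls until the kernel removes each node from /proc/partitions. A hedged sketch of the waitfornbd_exit polling seen in the trace (the _sketch function name and the 0.1 s sleep are illustrative; the 20-attempt cap matches the loop bound above):

    # Poll /proc/partitions until the nbd node disappears; give up after 20 tries.
    waitfornbd_exit_sketch() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            if ! grep -q -w "$nbd_name" /proc/partitions; then
                return 0    # device is gone, stop waiting
            fi
            sleep 0.1
        done
        return 1            # still present after all retries
    }

    # usage, mirroring the trace:
    #   rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 && waitfornbd_exit_sketch nbd0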
00:07:36.265    11:29:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:36.265    11:29:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:36.265     11:29:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:36.524    11:29:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:36.524     11:29:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:36.524     11:29:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:36.524    11:29:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:36.524     11:29:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:36.524     11:29:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:36.524     11:29:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:07:36.524    11:29:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:07:36.524    11:29:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:36.524   11:29:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:07:36.524   11:29:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:36.524   11:29:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
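After the stop, nbd_get_count asks the target which exports are left and counts device paths in the reply; the bare `true` in the trace exists because grep -c exits non-zero when the count is 0, which would otherwise trip errexit. A reduced sketch, with the rpc.py path and socket taken from the trace and the _sketch name illustrative:

    # List remaining NBD exports over RPC and count "/dev/nbd" entries.
    # grep -c still prints 0 but exits 1 when nothing matches, so '|| true'
    # keeps the helper from failing under 'set -e' when no disks are left.
    nbd_get_count_sketch() {
        local rpc_sock=/var/tmp/spdk-nbd.sock
        local names
        names=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" nbd_get_disks \
                | jq -r '.[] | .nbd_device')
        echo "$names" | grep -c /dev/nbd || true
    }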
00:07:36.524   11:29:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:07:36.784   11:29:02 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:07:37.043  [2024-12-16 11:29:02.968397] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:37.043  [2024-12-16 11:29:03.019119] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:37.043  [2024-12-16 11:29:03.019153] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:07:37.043  [2024-12-16 11:29:03.062691] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:07:37.043  [2024-12-16 11:29:03.062763] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
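Each round ends the same way: event.sh sends spdk_kill_instance SIGTERM over the nbd RPC socket, sleeps three seconds while the reactors wind down, and the NOTICE lines above are the next instance coming up on the same two cores. The outer shape of app_repeat, as reflected in the `for i in {0..2}` and `sleep 3` lines of the trace, is roughly as follows (bdev creation and I/O elided):

    # Three start/exercise/kill rounds against the same RPC socket.
    for round in {0..2}; do
        echo "spdk_app_start Round $round"
        # ... create Malloc0/Malloc1, export them as /dev/nbd0 and /dev/nbd1,
        #     write and verify 1 MiB on each (nbd_rpc_data_verify) ...
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
            spdk_kill_instance SIGTERM
        sleep 3
    done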
00:07:40.337  spdk_app_start Round 2
00:07:40.337  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:40.337   11:29:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:07:40.337   11:29:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:07:40.337   11:29:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70547 /var/tmp/spdk-nbd.sock
00:07:40.337   11:29:05 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70547 ']'
00:07:40.337   11:29:05 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:40.337   11:29:05 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:40.337   11:29:05 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:40.337   11:29:05 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:40.337   11:29:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:40.337   11:29:06 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:40.337   11:29:06 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
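waitforlisten blocks until the pid started in the previous step has its RPC socket answering; the `(( i == 0 ))` / `return 0` pair above is the successful tail of its retry loop, capped by max_retries=100. A cut-down version; the liveness probe via rpc_get_methods is an assumption, since the real helper lives in autotest_common.sh and may probe differently:

    # Retry an RPC ping against the given Unix socket until it answers or
    # the retry budget is spent; bail out early if the process died.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                    rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }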
00:07:40.337   11:29:06 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:40.337  Malloc0
00:07:40.337   11:29:06 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:40.597  Malloc1
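The two bdev_malloc_create 64 4096 calls create the Malloc0 and Malloc1 bdevs this round exports over NBD; the positional arguments are understood here as the bdev size in MiB and the block size in bytes, so each is a 64 MiB RAM-backed disk with 4 KiB blocks (that reading is an assumption from the RPC's usual signature, not something the trace itself states). Standing one up and exporting it by hand would look roughly like:

    # Create a 64 MiB malloc bdev with 4096-byte blocks, then export it.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create 64 4096          # prints the new bdev name, e.g. Malloc0
    $RPC nbd_start_disk Malloc0 /dev/nbd0    # maps the bdev to /dev/nbd0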
00:07:40.597   11:29:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:40.597   11:29:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:40.597   11:29:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:40.597   11:29:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:40.597   11:29:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:40.597   11:29:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:40.597   11:29:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:40.597   11:29:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:40.597   11:29:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:40.597   11:29:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:40.597   11:29:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:40.597   11:29:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:40.597   11:29:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:07:40.597   11:29:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:40.597   11:29:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:40.597   11:29:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:07:40.857  /dev/nbd0
00:07:40.857    11:29:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:40.857   11:29:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:40.857   11:29:06 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:07:40.857   11:29:06 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:07:40.857   11:29:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:40.857   11:29:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:40.857   11:29:06 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:07:40.857   11:29:06 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:07:40.857   11:29:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:40.857   11:29:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:40.857   11:29:06 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:40.857  1+0 records in
00:07:40.857  1+0 records out
00:07:40.857  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469245 s, 8.7 MB/s
00:07:40.857    11:29:06 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:40.857   11:29:06 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:07:40.857   11:29:06 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:40.857   11:29:06 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:40.857   11:29:06 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:07:40.857   11:29:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:40.857   11:29:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:40.857   11:29:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:07:41.117  /dev/nbd1
00:07:41.117    11:29:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:41.117   11:29:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:41.117   11:29:06 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:07:41.117   11:29:06 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:07:41.117   11:29:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:41.117   11:29:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:41.117   11:29:06 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:07:41.117   11:29:06 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:07:41.117   11:29:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:41.117   11:29:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:41.117   11:29:06 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:41.117  1+0 records in
00:07:41.117  1+0 records out
00:07:41.117  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221496 s, 18.5 MB/s
00:07:41.117    11:29:06 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:41.117   11:29:06 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:07:41.117   11:29:06 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:41.117   11:29:06 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:41.117   11:29:06 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:07:41.117   11:29:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:41.117   11:29:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
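nbd_start_disks pairs each bdev with a device node and then runs waitfornbd, which is stricter than the exit-side helper: besides waiting for the node to appear in /proc/partitions, it proves the device serves I/O with a single 4 KiB O_DIRECT read and a size check on the copied file, which is where the 1+0 records and stat -c %s lines above come from. A hedged sketch (function name and temp path are illustrative):

    # Wait for /dev/<nbd_name> to show up, then confirm it answers a 4 KiB
    # O_DIRECT read and that the read actually returned data.
    waitfornbd_sketch() {
        local nbd_name=$1 i size probe_file
        probe_file=$(mktemp /tmp/nbdtest.XXXXXX)

        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done

        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of="$probe_file" bs=4096 count=1 iflag=direct \
                2>/dev/null && break
            sleep 0.1
        done

        size=$(stat -c %s "$probe_file")
        rm -f "$probe_file"
        [[ $size != 0 ]]    # success only if the probe read produced data
    }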
00:07:41.117    11:29:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:41.117    11:29:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:41.117     11:29:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:41.377    11:29:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:41.377    {
00:07:41.377      "nbd_device": "/dev/nbd0",
00:07:41.377      "bdev_name": "Malloc0"
00:07:41.377    },
00:07:41.377    {
00:07:41.377      "nbd_device": "/dev/nbd1",
00:07:41.377      "bdev_name": "Malloc1"
00:07:41.377    }
00:07:41.377  ]'
00:07:41.377     11:29:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:41.377     11:29:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:41.377    {
00:07:41.377      "nbd_device": "/dev/nbd0",
00:07:41.377      "bdev_name": "Malloc0"
00:07:41.377    },
00:07:41.377    {
00:07:41.377      "nbd_device": "/dev/nbd1",
00:07:41.377      "bdev_name": "Malloc1"
00:07:41.377    }
00:07:41.377  ]'
00:07:41.377    11:29:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:41.377  /dev/nbd1'
00:07:41.377     11:29:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:41.377     11:29:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:41.377  /dev/nbd1'
00:07:41.377    11:29:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:07:41.377    11:29:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:07:41.377   11:29:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:07:41.377   11:29:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
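With both devices exported, nbd_get_disks returns the two-element JSON array printed above; the helper extracts the nbd_device fields with jq, counts them, and the '[' 2 -ne 2 ']' guard would abort the test if that count disagreed with the number of bdevs it mapped. The extraction on its own, fed the same JSON the RPC returned:

    # Count exported NBD device paths in the RPC reply.
    echo '[{"nbd_device": "/dev/nbd0", "bdev_name": "Malloc0"},
           {"nbd_device": "/dev/nbd1", "bdev_name": "Malloc1"}]' \
        | jq -r '.[] | .nbd_device' \
        | grep -c /dev/nbd          # prints 2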
00:07:41.377   11:29:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:07:41.377   11:29:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:41.377   11:29:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:41.377   11:29:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:07:41.377   11:29:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:41.377   11:29:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:41.377   11:29:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:41.377  256+0 records in
00:07:41.377  256+0 records out
00:07:41.377  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141511 s, 74.1 MB/s
00:07:41.377   11:29:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:41.377   11:29:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:41.377  256+0 records in
00:07:41.377  256+0 records out
00:07:41.377  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206114 s, 50.9 MB/s
00:07:41.377   11:29:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:41.377   11:29:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:41.377  256+0 records in
00:07:41.377  256+0 records out
00:07:41.377  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251652 s, 41.7 MB/s
00:07:41.377   11:29:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:41.377   11:29:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:41.377   11:29:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:41.377   11:29:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:41.377   11:29:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:41.377   11:29:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:41.377   11:29:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:41.377   11:29:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:41.377   11:29:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:07:41.377   11:29:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:41.377   11:29:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:07:41.377   11:29:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:41.377   11:29:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:41.377   11:29:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:41.377   11:29:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:41.377   11:29:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:41.377   11:29:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:07:41.377   11:29:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:41.377   11:29:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:41.637    11:29:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:41.637   11:29:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:41.637   11:29:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:41.637   11:29:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:41.637   11:29:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:41.637   11:29:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:41.637   11:29:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:41.637   11:29:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:41.637   11:29:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:41.637   11:29:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:41.897    11:29:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:41.897   11:29:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:41.897   11:29:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:41.897   11:29:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:41.897   11:29:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:41.897   11:29:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:41.897   11:29:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:41.897   11:29:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:41.897    11:29:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:41.897    11:29:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:41.897     11:29:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:42.157    11:29:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:42.157     11:29:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:42.157     11:29:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:42.157    11:29:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:42.157     11:29:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:42.157     11:29:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:42.157     11:29:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:07:42.157    11:29:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:07:42.157    11:29:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:42.157   11:29:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:07:42.157   11:29:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:42.157   11:29:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:07:42.157   11:29:08 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:07:42.417   11:29:08 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:07:42.676  [2024-12-16 11:29:08.512883] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:42.676  [2024-12-16 11:29:08.560808] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:42.676  [2024-12-16 11:29:08.560811] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:07:42.676  [2024-12-16 11:29:08.603338] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:07:42.676  [2024-12-16 11:29:08.603403] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:07:45.986  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:45.986   11:29:11 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70547 /var/tmp/spdk-nbd.sock
00:07:45.986   11:29:11 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70547 ']'
00:07:45.986   11:29:11 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:45.986   11:29:11 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:45.986   11:29:11 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:45.986   11:29:11 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:45.986   11:29:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:45.986   11:29:11 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:45.986   11:29:11 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:07:45.986   11:29:11 event.app_repeat -- event/event.sh@39 -- # killprocess 70547
00:07:45.986   11:29:11 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70547 ']'
00:07:45.986   11:29:11 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70547
00:07:45.986    11:29:11 event.app_repeat -- common/autotest_common.sh@955 -- # uname
00:07:45.986   11:29:11 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:45.986    11:29:11 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70547
00:07:45.986   11:29:11 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:45.986   11:29:11 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:45.986   11:29:11 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70547'
00:07:45.986  killing process with pid 70547
00:07:45.986   11:29:11 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70547
00:07:45.986   11:29:11 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70547
00:07:45.986  spdk_app_start is called in Round 0.
00:07:45.986  Shutdown signal received, stop current app iteration
00:07:45.986  Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization...
00:07:45.986  spdk_app_start is called in Round 1.
00:07:45.986  Shutdown signal received, stop current app iteration
00:07:45.986  Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization...
00:07:45.986  spdk_app_start is called in Round 2.
00:07:45.986  Shutdown signal received, stop current app iteration
00:07:45.986  Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization...
00:07:45.986  spdk_app_start is called in Round 3.
00:07:45.986  Shutdown signal received, stop current app iteration
00:07:45.986   11:29:11 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:07:45.986   11:29:11 event.app_repeat -- event/event.sh@42 -- # return 0
00:07:45.986  
00:07:45.986  real	0m18.207s
00:07:45.986  user	0m40.395s
00:07:45.986  sys	0m2.879s
00:07:45.986   11:29:11 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:45.986   11:29:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:45.986  ************************************
00:07:45.986  END TEST app_repeat
00:07:45.986  ************************************
00:07:45.986   11:29:11 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:07:45.986   11:29:11 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:07:45.986   11:29:11 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:45.986   11:29:11 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:45.986   11:29:11 event -- common/autotest_common.sh@10 -- # set +x
00:07:45.986  ************************************
00:07:45.986  START TEST cpu_locks
00:07:45.986  ************************************
00:07:45.986   11:29:11 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:07:45.986  * Looking for test storage...
00:07:45.986  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:07:45.986    11:29:11 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:07:45.986     11:29:11 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version
00:07:45.986     11:29:11 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:07:46.246    11:29:12 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:07:46.246    11:29:12 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:46.246    11:29:12 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:46.246    11:29:12 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:46.246    11:29:12 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:07:46.246    11:29:12 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:07:46.246    11:29:12 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:07:46.246    11:29:12 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:07:46.246    11:29:12 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:07:46.246    11:29:12 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:07:46.246    11:29:12 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:07:46.246    11:29:12 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:46.246    11:29:12 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:07:46.246    11:29:12 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:07:46.246    11:29:12 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:46.246    11:29:12 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:46.246     11:29:12 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:07:46.246     11:29:12 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:07:46.246     11:29:12 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:46.246     11:29:12 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:07:46.246    11:29:12 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:07:46.246     11:29:12 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:07:46.246     11:29:12 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:07:46.246     11:29:12 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:46.246     11:29:12 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:07:46.246    11:29:12 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:07:46.246    11:29:12 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:46.246    11:29:12 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:46.246    11:29:12 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:07:46.246    11:29:12 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:46.246    11:29:12 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:07:46.246  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:46.246  		--rc genhtml_branch_coverage=1
00:07:46.246  		--rc genhtml_function_coverage=1
00:07:46.246  		--rc genhtml_legend=1
00:07:46.246  		--rc geninfo_all_blocks=1
00:07:46.246  		--rc geninfo_unexecuted_blocks=1
00:07:46.246  		
00:07:46.246  		'
00:07:46.246    11:29:12 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:07:46.246  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:46.246  		--rc genhtml_branch_coverage=1
00:07:46.246  		--rc genhtml_function_coverage=1
00:07:46.246  		--rc genhtml_legend=1
00:07:46.246  		--rc geninfo_all_blocks=1
00:07:46.246  		--rc geninfo_unexecuted_blocks=1
00:07:46.246  		
00:07:46.246  		'
00:07:46.246    11:29:12 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:07:46.246  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:46.246  		--rc genhtml_branch_coverage=1
00:07:46.246  		--rc genhtml_function_coverage=1
00:07:46.246  		--rc genhtml_legend=1
00:07:46.246  		--rc geninfo_all_blocks=1
00:07:46.246  		--rc geninfo_unexecuted_blocks=1
00:07:46.247  		
00:07:46.247  		'
00:07:46.247    11:29:12 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 
00:07:46.247  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:46.247  		--rc genhtml_branch_coverage=1
00:07:46.247  		--rc genhtml_function_coverage=1
00:07:46.247  		--rc genhtml_legend=1
00:07:46.247  		--rc geninfo_all_blocks=1
00:07:46.247  		--rc geninfo_unexecuted_blocks=1
00:07:46.247  		
00:07:46.247  		'
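The block above is autotest_common.sh deciding which coverage flags to export: it takes the last field of `lcov --version`, runs it through scripts/common.sh's cmp_versions (the lt 1.15 2 call), and on success exports the LCOV_OPTS/LCOV strings with branch and function coverage enabled. A simplified comparison in the same spirit (numeric dot-separated components only; the real cmp_versions also splits on '-' and ':', as the IFS=.-: lines show):

    # Return success if version $1 is strictly less than version $2.
    version_lt() {
        local -a a b
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1    # equal versions are not less-than
    }

    version_lt 1.15 2 && echo "1.15 < 2"    # matches the trace's lt 1.15 2 result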
00:07:46.247   11:29:12 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:07:46.247   11:29:12 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:07:46.247   11:29:12 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:07:46.247   11:29:12 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:07:46.247   11:29:12 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:46.247   11:29:12 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:46.247   11:29:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:46.247  ************************************
00:07:46.247  START TEST default_locks
00:07:46.247  ************************************
00:07:46.247   11:29:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks
00:07:46.247   11:29:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70985
00:07:46.247   11:29:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:07:46.247   11:29:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70985
00:07:46.247   11:29:12 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70985 ']'
00:07:46.247  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:46.247   11:29:12 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:46.247   11:29:12 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:46.247   11:29:12 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:46.247   11:29:12 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:46.247   11:29:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:07:46.247  [2024-12-16 11:29:12.201289] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:46.247  [2024-12-16 11:29:12.201410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70985 ]
00:07:46.507  [2024-12-16 11:29:12.349712] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:46.507  [2024-12-16 11:29:12.397799] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:47.075   11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:47.075   11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0
00:07:47.075   11:29:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70985
00:07:47.075   11:29:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70985
00:07:47.075   11:29:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
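default_locks starts a single spdk_tgt pinned to core 0 (-m 0x1) and then checks that the instance actually holds its CPU core lock: locks_exist runs lslocks -p <pid> and greps for spdk_cpu_lock, the per-core lock files SPDK takes under /var/tmp when core locking is active (the exact path pattern is an assumption; the trace only shows the grep). An equivalent standalone check:

    # Succeeds if the given process holds at least one spdk_cpu_lock file lock.
    locks_exist_sketch() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    # usage: locks_exist_sketch 70985 && echo "core lock is held"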
00:07:47.334   11:29:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70985
00:07:47.334   11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 70985 ']'
00:07:47.334   11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 70985
00:07:47.334    11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname
00:07:47.334   11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:47.334    11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70985
00:07:47.593  killing process with pid 70985
00:07:47.593   11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:47.593   11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:47.593   11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70985'
00:07:47.593   11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 70985
00:07:47.593   11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 70985
00:07:47.852   11:29:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70985
00:07:47.852   11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0
00:07:47.852   11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70985
00:07:47.852   11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:07:47.852   11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:47.852    11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:07:47.852   11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:47.852   11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 70985
00:07:47.852   11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70985 ']'
00:07:47.852  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:47.852   11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:47.852   11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:47.852   11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:47.852   11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:47.852  ERROR: process (pid: 70985) is no longer running
00:07:47.852   11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:07:47.852  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (70985) - No such process
00:07:47.852   11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:47.852   11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1
00:07:47.852   11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1
00:07:47.852   11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:47.852   11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:47.852   11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:47.852   11:29:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:07:47.852   11:29:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:07:47.852   11:29:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:07:47.852   11:29:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
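After killing the instance, the test reruns waitforlisten under the NOT wrapper: the "No such process" line is the expected outcome, the wrapper turns that failure into es=1, and no_locks then asserts that zero spdk_cpu_lock files were left behind. The wrapper boils down to inverting an exit status so errexit-style scripts can assert on negative cases; a hedged sketch (the real NOT in autotest_common.sh also distinguishes crash exit codes, which is the `(( es > 128 ))` check above):

    # Run a command that is expected to fail; succeed only if it did fail.
    NOT_sketch() {
        if "$@"; then
            return 1    # command unexpectedly succeeded
        fi
        return 0        # command failed, which is what the caller wanted
    }

    # usage: NOT_sketch kill -0 70985    # passes once pid 70985 is really gone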
00:07:47.852  
00:07:47.852  real	0m1.707s
00:07:47.852  user	0m1.680s
00:07:47.852  sys	0m0.584s
00:07:47.852   11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:47.852   11:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:07:47.852  ************************************
00:07:47.852  END TEST default_locks
00:07:47.852  ************************************
00:07:47.852   11:29:13 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:07:47.852   11:29:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:47.852   11:29:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:47.852   11:29:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:47.852  ************************************
00:07:47.852  START TEST default_locks_via_rpc
00:07:47.852  ************************************
00:07:47.852   11:29:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc
00:07:47.852   11:29:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71039
00:07:47.852   11:29:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:07:47.852   11:29:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71039
00:07:47.852   11:29:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71039 ']'
00:07:47.852   11:29:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:47.852   11:29:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:47.852   11:29:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:47.852  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:47.852   11:29:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:47.852   11:29:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:48.113  [2024-12-16 11:29:13.971729] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:48.113  [2024-12-16 11:29:13.971950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71039 ]
00:07:48.113  [2024-12-16 11:29:14.135759] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:48.372  [2024-12-16 11:29:14.183362] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:48.941   11:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:48.941   11:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:07:48.941   11:29:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:07:48.941   11:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:48.941   11:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:48.941   11:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:48.941   11:29:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:07:48.941   11:29:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:07:48.941   11:29:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:07:48.941   11:29:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:07:48.941   11:29:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:07:48.941   11:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:48.941   11:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:48.941   11:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:48.941   11:29:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71039
00:07:48.941   11:29:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71039
00:07:48.941   11:29:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
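default_locks_via_rpc drives the same mechanism over RPC instead of command-line flags: framework_disable_cpumask_locks releases the core locks on the running target (which is why the no_locks assertion right after it sees zero lock files), and framework_enable_cpumask_locks re-acquires them, so the lslocks/grep check above succeeds again. In isolation, with the pid supplied by the caller:

    # Toggle CPU core locking on a running spdk_tgt and observe the file locks.
    check_cpumask_lock_rpcs() {
        local pid=$1
        local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

        $rpc framework_disable_cpumask_locks                  # drop the per-core locks
        lslocks -p "$pid" | grep -c spdk_cpu_lock || true     # expect 0 here

        $rpc framework_enable_cpumask_locks                   # take them again
        lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "locks re-acquired"
    }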
00:07:49.510   11:29:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71039
00:07:49.510   11:29:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 71039 ']'
00:07:49.510   11:29:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 71039
00:07:49.510    11:29:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname
00:07:49.510   11:29:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:49.510    11:29:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71039
00:07:49.510  killing process with pid 71039
00:07:49.510   11:29:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:49.510   11:29:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:49.510   11:29:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71039'
00:07:49.510   11:29:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 71039
00:07:49.510   11:29:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 71039
00:07:49.769  
00:07:49.769  real	0m1.922s
00:07:49.769  user	0m1.908s
00:07:49.769  sys	0m0.686s
00:07:49.769  ************************************
00:07:49.769  END TEST default_locks_via_rpc
00:07:49.769  ************************************
00:07:49.769   11:29:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:49.769   11:29:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:50.029   11:29:15 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:07:50.029   11:29:15 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:50.029   11:29:15 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:50.029   11:29:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:50.029  ************************************
00:07:50.029  START TEST non_locking_app_on_locked_coremask
00:07:50.029  ************************************
00:07:50.029   11:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask
00:07:50.029   11:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71086
00:07:50.029   11:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:07:50.029   11:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71086 /var/tmp/spdk.sock
00:07:50.029   11:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71086 ']'
00:07:50.029   11:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:50.029   11:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:50.029   11:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:50.029  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:50.029   11:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:50.029   11:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:50.029  [2024-12-16 11:29:15.957908] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:50.029  [2024-12-16 11:29:15.958028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71086 ]
00:07:50.289  [2024-12-16 11:29:16.116015] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:50.289  [2024-12-16 11:29:16.166522] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:50.857   11:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:50.857   11:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:07:50.857   11:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:07:50.857   11:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71102
00:07:50.857   11:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71102 /var/tmp/spdk2.sock
00:07:50.857   11:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71102 ']'
00:07:50.857   11:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:50.857   11:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:50.857   11:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:50.857  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:50.857   11:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:50.857   11:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:50.857  [2024-12-16 11:29:16.877104] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:50.857  [2024-12-16 11:29:16.877312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71102 ]
00:07:51.116  [2024-12-16 11:29:17.032549] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:51.116  [2024-12-16 11:29:17.032613] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:51.116  [2024-12-16 11:29:17.135768] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:51.684   11:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:51.684   11:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:07:51.684   11:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71086
00:07:51.684   11:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71086
00:07:51.684   11:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
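non_locking_app_on_locked_coremask shows the lock is advisory and opt-out per instance: the first spdk_tgt on core 0 holds the lock, and the second is started on the same mask but with --disable-cpumask-locks and its own socket (/var/tmp/spdk2.sock), so it boots without contending for the lock file; the locks_exist check above confirms the first instance still owns it. The two-instance setup, reduced to its command lines (concurrent-startup details such as hugepage file prefixes are handled by the target itself and elided here):

    # First target takes the core-0 lock; second shares core 0 without locking.
    BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    $BIN -m 0x1 &                                                   # holds the core-0 lock
    pid1=$!
    $BIN -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # opts out of locking
    pid2=$!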
00:07:52.251   11:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71086
00:07:52.251   11:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71086 ']'
00:07:52.251   11:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71086
00:07:52.251    11:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:07:52.251   11:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:52.251    11:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71086
00:07:52.509   11:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:52.509   11:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:52.509  killing process with pid 71086
00:07:52.509   11:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71086'
00:07:52.509   11:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71086
00:07:52.509   11:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71086
00:07:53.077   11:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71102
00:07:53.077   11:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71102 ']'
00:07:53.077   11:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71102
00:07:53.077    11:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:07:53.077   11:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:53.077    11:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71102
00:07:53.335  killing process with pid 71102
00:07:53.335   11:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:53.335   11:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:53.335   11:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71102'
00:07:53.335   11:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71102
00:07:53.335   11:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71102
00:07:53.592  
00:07:53.592  real	0m3.669s
00:07:53.592  user	0m3.855s
00:07:53.592  sys	0m1.121s
00:07:53.592   11:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:53.592  ************************************
00:07:53.592  END TEST non_locking_app_on_locked_coremask
00:07:53.592  ************************************
00:07:53.592   11:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:53.592   11:29:19 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:07:53.592   11:29:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:53.592   11:29:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:53.592   11:29:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:53.592  ************************************
00:07:53.592  START TEST locking_app_on_unlocked_coremask
00:07:53.592  ************************************
00:07:53.592   11:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask
00:07:53.592   11:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71171
00:07:53.592   11:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:07:53.592  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:53.592   11:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71171 /var/tmp/spdk.sock
00:07:53.592   11:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71171 ']'
00:07:53.592   11:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:53.592   11:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:53.592   11:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:53.592   11:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:53.592   11:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:53.851  [2024-12-16 11:29:19.684056] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:53.851  [2024-12-16 11:29:19.684292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71171 ]
00:07:53.851  [2024-12-16 11:29:19.830958] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:53.851  [2024-12-16 11:29:19.831021] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:53.851  [2024-12-16 11:29:19.881888] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:54.789   11:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:54.789   11:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:07:54.789   11:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71187
00:07:54.789   11:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71187 /var/tmp/spdk2.sock
00:07:54.790   11:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:07:54.790   11:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71187 ']'
00:07:54.790   11:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:54.790   11:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:54.790   11:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:54.790  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:54.790   11:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:54.790   11:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:54.790  [2024-12-16 11:29:20.621120] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:54.790  [2024-12-16 11:29:20.621377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71187 ]
00:07:54.790  [2024-12-16 11:29:20.777427] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:55.049  [2024-12-16 11:29:20.878621] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:55.615   11:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:55.615   11:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:07:55.615   11:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71187
00:07:55.615   11:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71187
00:07:55.615   11:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:55.875   11:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71171
00:07:55.875   11:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71171 ']'
00:07:55.875   11:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71171
00:07:55.875    11:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:07:55.875   11:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:56.134    11:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71171
00:07:56.134  killing process with pid 71171
00:07:56.134   11:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:56.134   11:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:56.134   11:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71171'
00:07:56.134   11:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71171
00:07:56.134   11:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71171
00:07:56.701   11:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71187
00:07:56.701   11:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71187 ']'
00:07:56.701   11:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71187
00:07:56.701    11:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:07:56.701   11:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:56.701    11:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71187
00:07:56.960   11:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:56.960   11:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:56.960   11:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71187'
00:07:56.960  killing process with pid 71187
00:07:56.960   11:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71187
00:07:56.960   11:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71187
00:07:57.217  
00:07:57.217  real	0m3.582s
00:07:57.217  user	0m3.790s
00:07:57.217  sys	0m1.099s
00:07:57.217   11:29:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:57.217  ************************************
00:07:57.217  END TEST locking_app_on_unlocked_coremask
00:07:57.217  ************************************
00:07:57.217   11:29:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:57.217   11:29:23 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:07:57.217   11:29:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:57.217   11:29:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:57.217   11:29:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:57.217  ************************************
00:07:57.217  START TEST locking_app_on_locked_coremask
00:07:57.217  ************************************
00:07:57.217   11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask
00:07:57.217   11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71247
00:07:57.217   11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:07:57.217   11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71247 /var/tmp/spdk.sock
00:07:57.217   11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71247 ']'
00:07:57.217   11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:57.217   11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:57.217   11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:57.217  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:57.217   11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:57.217   11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:57.475  [2024-12-16 11:29:23.346335] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:57.475  [2024-12-16 11:29:23.346544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71247 ]
00:07:57.475  [2024-12-16 11:29:23.505842] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:57.734  [2024-12-16 11:29:23.553154] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:58.301   11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:58.301   11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:07:58.301   11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:07:58.301   11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71263
00:07:58.301   11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71263 /var/tmp/spdk2.sock
00:07:58.301   11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0
00:07:58.301   11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71263 /var/tmp/spdk2.sock
00:07:58.301   11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:07:58.301   11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:58.301    11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:07:58.301   11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:58.301   11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71263 /var/tmp/spdk2.sock
00:07:58.301   11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71263 ']'
00:07:58.301   11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:58.301   11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:58.301   11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:58.301  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:58.301   11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:58.301   11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:58.301  [2024-12-16 11:29:24.264662] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:58.301  [2024-12-16 11:29:24.264890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71263 ]
00:07:58.559  [2024-12-16 11:29:24.414866] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71247 has claimed it.
00:07:58.559  [2024-12-16 11:29:24.414932] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:07:59.211  ERROR: process (pid: 71263) is no longer running
00:07:59.211  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71263) - No such process
00:07:59.211   11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:59.211   11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1
00:07:59.211   11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1
00:07:59.211   11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:59.211   11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:59.211   11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:59.211   11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71247
00:07:59.211   11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71247
00:07:59.211   11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:59.468   11:29:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71247
00:07:59.468   11:29:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71247 ']'
00:07:59.468   11:29:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71247
00:07:59.468    11:29:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:07:59.468   11:29:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:59.468    11:29:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71247
00:07:59.468   11:29:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:59.468   11:29:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:59.468   11:29:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71247'
00:07:59.468  killing process with pid 71247
00:07:59.468   11:29:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71247
00:07:59.468   11:29:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71247
00:07:59.727  
00:07:59.727  real	0m2.495s
00:07:59.727  user	0m2.719s
00:07:59.727  sys	0m0.722s
00:07:59.727   11:29:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:59.727   11:29:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:59.727  ************************************
00:07:59.727  END TEST locking_app_on_locked_coremask
00:07:59.727  ************************************
00:07:59.727   11:29:25 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:07:59.727   11:29:25 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:59.727   11:29:25 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:59.727   11:29:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:59.985  ************************************
00:07:59.985  START TEST locking_overlapped_coremask
00:07:59.985  ************************************
00:07:59.985   11:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask
00:07:59.985   11:29:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71316
00:07:59.985   11:29:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:07:59.985   11:29:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71316 /var/tmp/spdk.sock
00:07:59.985   11:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71316 ']'
00:07:59.985  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:59.985   11:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:59.985   11:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:59.985   11:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:59.985   11:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:59.985   11:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:59.986  [2024-12-16 11:29:25.894666] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:59.986  [2024-12-16 11:29:25.894820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71316 ]
00:08:00.243  [2024-12-16 11:29:26.056981] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:00.243  [2024-12-16 11:29:26.110615] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:08:00.243  [2024-12-16 11:29:26.110707] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:00.243  [2024-12-16 11:29:26.110809] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:08:00.810   11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:00.810   11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0
00:08:00.810   11:29:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71334
00:08:00.810   11:29:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:08:00.810   11:29:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71334 /var/tmp/spdk2.sock
00:08:00.810   11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0
00:08:00.810   11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71334 /var/tmp/spdk2.sock
00:08:00.810   11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:08:00.810   11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:00.810    11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:08:00.810   11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:00.810   11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71334 /var/tmp/spdk2.sock
00:08:00.810   11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71334 ']'
00:08:00.810   11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:00.810   11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:00.810   11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:00.810  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:00.810   11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:00.810   11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:00.810  [2024-12-16 11:29:26.862432] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:08:00.810  [2024-12-16 11:29:26.863032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71334 ]
00:08:01.068  [2024-12-16 11:29:27.019605] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71316 has claimed it.
00:08:01.068  [2024-12-16 11:29:27.019712] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:08:01.634  ERROR: process (pid: 71334) is no longer running
00:08:01.634  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71334) - No such process
00:08:01.634   11:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:01.634   11:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1
00:08:01.634   11:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1
00:08:01.634   11:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:08:01.634   11:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:08:01.634   11:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:08:01.634   11:29:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:08:01.634   11:29:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:08:01.634   11:29:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:08:01.634   11:29:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:08:01.634   11:29:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71316
00:08:01.634   11:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71316 ']'
00:08:01.634   11:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71316
00:08:01.634    11:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname
00:08:01.634   11:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:01.634    11:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71316
00:08:01.634   11:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:01.634   11:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:01.634   11:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71316'
00:08:01.634  killing process with pid 71316
00:08:01.634   11:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71316
00:08:01.634   11:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71316
00:08:01.892  
00:08:01.892  real	0m2.153s
00:08:01.892  user	0m5.745s
00:08:01.892  sys	0m0.563s
00:08:01.892   11:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:01.892   11:29:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:01.892  ************************************
00:08:01.892  END TEST locking_overlapped_coremask
00:08:01.892  ************************************
00:08:02.151   11:29:28 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:08:02.151   11:29:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:02.151   11:29:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:02.151   11:29:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:08:02.151  ************************************
00:08:02.151  START TEST locking_overlapped_coremask_via_rpc
00:08:02.151  ************************************
00:08:02.151   11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc
00:08:02.151   11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71376
00:08:02.151   11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:08:02.151   11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71376 /var/tmp/spdk.sock
00:08:02.151   11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71376 ']'
00:08:02.151   11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:02.151   11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:02.151   11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:02.151  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:02.151   11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:02.151   11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:02.151  [2024-12-16 11:29:28.114496] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:08:02.151  [2024-12-16 11:29:28.114725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71376 ]
00:08:02.409  [2024-12-16 11:29:28.265868] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:08:02.409  [2024-12-16 11:29:28.265929] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:02.409  [2024-12-16 11:29:28.318654] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:08:02.409  [2024-12-16 11:29:28.318697] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:02.409  [2024-12-16 11:29:28.318816] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:08:02.976  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:02.976   11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:02.976   11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:08:02.976   11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71394
00:08:02.976   11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:08:02.976   11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71394 /var/tmp/spdk2.sock
00:08:02.976   11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71394 ']'
00:08:02.976   11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:02.976   11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:02.976   11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:02.976   11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:02.976   11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:02.976  [2024-12-16 11:29:29.025149] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:08:02.976  [2024-12-16 11:29:29.025277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71394 ]
00:08:03.243  [2024-12-16 11:29:29.184135] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:08:03.244  [2024-12-16 11:29:29.184211] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:03.244  [2024-12-16 11:29:29.294644] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:08:03.244  [2024-12-16 11:29:29.298655] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:08:03.244  [2024-12-16 11:29:29.298688] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4
00:08:04.193   11:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:04.193   11:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:08:04.193   11:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:08:04.193   11:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:04.193   11:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:04.193   11:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:04.193   11:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:08:04.193   11:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0
00:08:04.193   11:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:08:04.193   11:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:08:04.193   11:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:04.193    11:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:08:04.193   11:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:04.193   11:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:08:04.193   11:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:04.193   11:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:04.193  [2024-12-16 11:29:29.928748] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71376 has claimed it.
00:08:04.193  request:
00:08:04.193  {
00:08:04.193  "method": "framework_enable_cpumask_locks",
00:08:04.193  "req_id": 1
00:08:04.193  }
00:08:04.193  Got JSON-RPC error response
00:08:04.193  response:
00:08:04.193  {
00:08:04.193  "code": -32603,
00:08:04.193  "message": "Failed to claim CPU core: 2"
00:08:04.193  }
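Note: the -32603 error above can be reproduced by hand against the second target's RPC socket. The line below is a hypothetical manual replay of the rpc_cmd invocation traced at cpu_locks.sh@156, using the same rpc.py path this log shows later for spdk_get_version; it assumes core 2 is still held by pid 71376 at the time of the call.

  # Hypothetical manual replay of the traced RPC (not part of the test run):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # Expected to fail with the same "Failed to claim CPU core: 2" (-32603) response while pid 71376 holds the lock.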
00:08:04.193  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:04.193   11:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:08:04.193   11:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1
00:08:04.193   11:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:08:04.193   11:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:08:04.193   11:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:08:04.193   11:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71376 /var/tmp/spdk.sock
00:08:04.193   11:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71376 ']'
00:08:04.193   11:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:04.193   11:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:04.193   11:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:04.193   11:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:04.193   11:29:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:04.193  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:04.193   11:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:04.193   11:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:08:04.193   11:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71394 /var/tmp/spdk2.sock
00:08:04.193   11:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71394 ']'
00:08:04.193   11:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:04.193   11:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:04.193   11:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:04.193   11:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:04.193   11:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:04.451  ************************************
00:08:04.451  END TEST locking_overlapped_coremask_via_rpc
00:08:04.451  ************************************
00:08:04.451   11:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:04.451   11:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:08:04.451   11:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:08:04.451   11:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:08:04.451   11:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:08:04.451   11:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:08:04.451  
00:08:04.451  real	0m2.387s
00:08:04.451  user	0m1.123s
00:08:04.451  sys	0m0.188s
00:08:04.451   11:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:04.451   11:29:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:04.451   11:29:30 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:08:04.451   11:29:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71376 ]]
00:08:04.451   11:29:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71376
00:08:04.451   11:29:30 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71376 ']'
00:08:04.451   11:29:30 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71376
00:08:04.451    11:29:30 event.cpu_locks -- common/autotest_common.sh@955 -- # uname
00:08:04.451   11:29:30 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:04.451    11:29:30 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71376
00:08:04.451  killing process with pid 71376
00:08:04.451   11:29:30 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:04.452   11:29:30 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:04.452   11:29:30 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71376'
00:08:04.452   11:29:30 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71376
00:08:04.452   11:29:30 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71376
00:08:05.019   11:29:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71394 ]]
00:08:05.019   11:29:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71394
00:08:05.019   11:29:30 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71394 ']'
00:08:05.019   11:29:30 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71394
00:08:05.019    11:29:30 event.cpu_locks -- common/autotest_common.sh@955 -- # uname
00:08:05.019   11:29:30 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:05.019    11:29:30 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71394
00:08:05.019  killing process with pid 71394
00:08:05.019   11:29:30 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:08:05.019   11:29:30 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:08:05.019   11:29:30 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71394'
00:08:05.019   11:29:30 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71394
00:08:05.019   11:29:30 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71394
00:08:05.277   11:29:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:08:05.277   11:29:31 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:08:05.277  Process with pid 71376 is not found
00:08:05.277  Process with pid 71394 is not found
00:08:05.277   11:29:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71376 ]]
00:08:05.277   11:29:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71376
00:08:05.277   11:29:31 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71376 ']'
00:08:05.277   11:29:31 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71376
00:08:05.277  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71376) - No such process
00:08:05.277   11:29:31 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71376 is not found'
00:08:05.277   11:29:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71394 ]]
00:08:05.277   11:29:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71394
00:08:05.277   11:29:31 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71394 ']'
00:08:05.277   11:29:31 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71394
00:08:05.277  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71394) - No such process
00:08:05.277   11:29:31 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71394 is not found'
00:08:05.277   11:29:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:08:05.277  
00:08:05.277  real	0m19.455s
00:08:05.277  user	0m32.490s
00:08:05.277  sys	0m6.059s
00:08:05.277   11:29:31 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:05.277   11:29:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:08:05.277  ************************************
00:08:05.277  END TEST cpu_locks
00:08:05.277  ************************************
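Note: the two helpers this suite keeps tracing, locks_exist (cpu_locks.sh@22) and check_remaining_locks (cpu_locks.sh@36-38), reduce to the few lines below. This is only a readability sketch reconstructed from the xtrace above, not the actual event/cpu_locks.sh; it assumes the target keeps one locked /var/tmp/spdk_cpu_lock_NNN file per claimed core, which is what the lslocks checks and the claim_cpu_cores errors in this log indicate.

  # Sketch reconstructed from the xtrace above; not the real event/cpu_locks.sh.
  locks_exist() {
      local pid=$1
      # a target that claimed its cores shows spdk_cpu_lock_* entries under lslocks
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }
  check_remaining_locks() {
      local locks=(/var/tmp/spdk_cpu_lock_*)
      local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
      [[ ${locks[*]} == "${locks_expected[*]}" ]]
  }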
00:08:05.535  
00:08:05.535  real	0m48.174s
00:08:05.535  user	1m31.557s
00:08:05.535  sys	0m10.140s
00:08:05.535   11:29:31 event -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:05.535   11:29:31 event -- common/autotest_common.sh@10 -- # set +x
00:08:05.535  ************************************
00:08:05.535  END TEST event
00:08:05.535  ************************************
00:08:05.536   11:29:31  -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:08:05.536   11:29:31  -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:05.536   11:29:31  -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:05.536   11:29:31  -- common/autotest_common.sh@10 -- # set +x
00:08:05.536  ************************************
00:08:05.536  START TEST thread
00:08:05.536  ************************************
00:08:05.536   11:29:31 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:08:05.536  * Looking for test storage...
00:08:05.536  * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread
00:08:05.536    11:29:31 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:08:05.536     11:29:31 thread -- common/autotest_common.sh@1681 -- # lcov --version
00:08:05.536     11:29:31 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:08:05.794    11:29:31 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:08:05.794    11:29:31 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:05.794    11:29:31 thread -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:05.794    11:29:31 thread -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:05.794    11:29:31 thread -- scripts/common.sh@336 -- # IFS=.-:
00:08:05.794    11:29:31 thread -- scripts/common.sh@336 -- # read -ra ver1
00:08:05.794    11:29:31 thread -- scripts/common.sh@337 -- # IFS=.-:
00:08:05.794    11:29:31 thread -- scripts/common.sh@337 -- # read -ra ver2
00:08:05.794    11:29:31 thread -- scripts/common.sh@338 -- # local 'op=<'
00:08:05.794    11:29:31 thread -- scripts/common.sh@340 -- # ver1_l=2
00:08:05.794    11:29:31 thread -- scripts/common.sh@341 -- # ver2_l=1
00:08:05.794    11:29:31 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:05.794    11:29:31 thread -- scripts/common.sh@344 -- # case "$op" in
00:08:05.794    11:29:31 thread -- scripts/common.sh@345 -- # : 1
00:08:05.794    11:29:31 thread -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:05.794    11:29:31 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:05.794     11:29:31 thread -- scripts/common.sh@365 -- # decimal 1
00:08:05.794     11:29:31 thread -- scripts/common.sh@353 -- # local d=1
00:08:05.794     11:29:31 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:05.794     11:29:31 thread -- scripts/common.sh@355 -- # echo 1
00:08:05.794    11:29:31 thread -- scripts/common.sh@365 -- # ver1[v]=1
00:08:05.794     11:29:31 thread -- scripts/common.sh@366 -- # decimal 2
00:08:05.794     11:29:31 thread -- scripts/common.sh@353 -- # local d=2
00:08:05.794     11:29:31 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:05.794     11:29:31 thread -- scripts/common.sh@355 -- # echo 2
00:08:05.794    11:29:31 thread -- scripts/common.sh@366 -- # ver2[v]=2
00:08:05.794    11:29:31 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:05.794    11:29:31 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:05.794    11:29:31 thread -- scripts/common.sh@368 -- # return 0
00:08:05.794    11:29:31 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:05.794    11:29:31 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:08:05.794  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:05.794  		--rc genhtml_branch_coverage=1
00:08:05.794  		--rc genhtml_function_coverage=1
00:08:05.794  		--rc genhtml_legend=1
00:08:05.794  		--rc geninfo_all_blocks=1
00:08:05.794  		--rc geninfo_unexecuted_blocks=1
00:08:05.794  		
00:08:05.794  		'
00:08:05.794    11:29:31 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:08:05.794  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:05.794  		--rc genhtml_branch_coverage=1
00:08:05.794  		--rc genhtml_function_coverage=1
00:08:05.794  		--rc genhtml_legend=1
00:08:05.794  		--rc geninfo_all_blocks=1
00:08:05.794  		--rc geninfo_unexecuted_blocks=1
00:08:05.794  		
00:08:05.794  		'
00:08:05.794    11:29:31 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:08:05.794  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:05.794  		--rc genhtml_branch_coverage=1
00:08:05.794  		--rc genhtml_function_coverage=1
00:08:05.795  		--rc genhtml_legend=1
00:08:05.795  		--rc geninfo_all_blocks=1
00:08:05.795  		--rc geninfo_unexecuted_blocks=1
00:08:05.795  		
00:08:05.795  		'
00:08:05.795    11:29:31 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 
00:08:05.795  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:05.795  		--rc genhtml_branch_coverage=1
00:08:05.795  		--rc genhtml_function_coverage=1
00:08:05.795  		--rc genhtml_legend=1
00:08:05.795  		--rc geninfo_all_blocks=1
00:08:05.795  		--rc geninfo_unexecuted_blocks=1
00:08:05.795  		
00:08:05.795  		'
00:08:05.795   11:29:31 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:08:05.795   11:29:31 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']'
00:08:05.795   11:29:31 thread -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:05.795   11:29:31 thread -- common/autotest_common.sh@10 -- # set +x
00:08:05.795  ************************************
00:08:05.795  START TEST thread_poller_perf
00:08:05.795  ************************************
00:08:05.795   11:29:31 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:08:05.795  [2024-12-16 11:29:31.718405] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:08:05.795  [2024-12-16 11:29:31.718610] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71521 ]
00:08:06.053  [2024-12-16 11:29:31.879676] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:06.053  [2024-12-16 11:29:31.925867] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:06.053  Running 1000 pollers for 1 seconds with 1 microseconds period.
00:08:06.999  ======================================
00:08:06.999  busy:2301467174 (cyc)
00:08:06.999  total_run_count: 390000
00:08:06.999  tsc_hz: 2290000000 (cyc)
00:08:06.999  ======================================
00:08:06.999  poller_cost: 5901 (cyc), 2576 (nsec)
00:08:06.999  
00:08:06.999  real	0m1.353s
00:08:06.999  user	0m1.146s
00:08:06.999  sys	0m0.100s
00:08:06.999   11:29:33 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:06.999   11:29:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:08:06.999  ************************************
00:08:06.999  END TEST thread_poller_perf
00:08:06.999  ************************************
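Note: poller_cost above is just the busy cycle count divided by total_run_count, converted to nanoseconds with the reported tsc_hz. A quick sanity check using only the numbers printed in this run (the awk one-liner is illustrative, not part of poller_perf):

  busy_cyc=2301467174   # busy: ... (cyc) from the block above
  runs=390000           # total_run_count
  tsc_hz=2290000000     # tsc_hz (cyc)
  awk -v b="$busy_cyc" -v r="$runs" -v hz="$tsc_hz" \
      'BEGIN { c = b / r; printf "poller_cost: %d (cyc), %d (nsec)\n", c, c * 1e9 / hz }'
  # prints roughly: poller_cost: 5901 (cyc), 2576 (nsec), matching the report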
00:08:07.259   11:29:33 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:08:07.259   11:29:33 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']'
00:08:07.259   11:29:33 thread -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:07.259   11:29:33 thread -- common/autotest_common.sh@10 -- # set +x
00:08:07.259  ************************************
00:08:07.259  START TEST thread_poller_perf
00:08:07.259  ************************************
00:08:07.259   11:29:33 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:08:07.259  [2024-12-16 11:29:33.134613] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:08:07.259  [2024-12-16 11:29:33.134723] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71563 ]
00:08:07.259  [2024-12-16 11:29:33.295412] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:07.516  Running 1000 pollers for 1 seconds with 0 microseconds period.
00:08:07.516  [2024-12-16 11:29:33.339757] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:08.450  ======================================
00:08:08.450  busy:2293625600 (cyc)
00:08:08.450  total_run_count: 5145000
00:08:08.450  tsc_hz: 2290000000 (cyc)
00:08:08.450  ======================================
00:08:08.450  poller_cost: 445 (cyc), 194 (nsec)
00:08:08.450  
00:08:08.450  real	0m1.343s
00:08:08.450  user	0m1.139s
00:08:08.450  sys	0m0.099s
00:08:08.450   11:29:34 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:08.450   11:29:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:08:08.450  ************************************
00:08:08.450  END TEST thread_poller_perf
00:08:08.450  ************************************
00:08:08.450   11:29:34 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:08:08.450  ************************************
00:08:08.450  END TEST thread
00:08:08.450  ************************************
00:08:08.450  
00:08:08.450  real	0m3.040s
00:08:08.450  user	0m2.457s
00:08:08.450  sys	0m0.379s
00:08:08.451   11:29:34 thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:08.451   11:29:34 thread -- common/autotest_common.sh@10 -- # set +x
00:08:08.709   11:29:34  -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:08:08.709   11:29:34  -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:08:08.709   11:29:34  -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:08.709   11:29:34  -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:08.709   11:29:34  -- common/autotest_common.sh@10 -- # set +x
00:08:08.709  ************************************
00:08:08.709  START TEST app_cmdline
00:08:08.709  ************************************
00:08:08.709   11:29:34 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:08:08.709  * Looking for test storage...
00:08:08.709  * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:08:08.709    11:29:34 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:08:08.709     11:29:34 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version
00:08:08.709     11:29:34 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:08:08.967    11:29:34 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:08:08.967    11:29:34 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:08.967    11:29:34 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:08.967    11:29:34 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:08.967    11:29:34 app_cmdline -- scripts/common.sh@336 -- # IFS=.-:
00:08:08.967    11:29:34 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1
00:08:08.967    11:29:34 app_cmdline -- scripts/common.sh@337 -- # IFS=.-:
00:08:08.967    11:29:34 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2
00:08:08.967    11:29:34 app_cmdline -- scripts/common.sh@338 -- # local 'op=<'
00:08:08.967    11:29:34 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2
00:08:08.967    11:29:34 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1
00:08:08.967    11:29:34 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:08.967    11:29:34 app_cmdline -- scripts/common.sh@344 -- # case "$op" in
00:08:08.967    11:29:34 app_cmdline -- scripts/common.sh@345 -- # : 1
00:08:08.967    11:29:34 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:08.967    11:29:34 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:08.967     11:29:34 app_cmdline -- scripts/common.sh@365 -- # decimal 1
00:08:08.967     11:29:34 app_cmdline -- scripts/common.sh@353 -- # local d=1
00:08:08.967     11:29:34 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:08.967     11:29:34 app_cmdline -- scripts/common.sh@355 -- # echo 1
00:08:08.967    11:29:34 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1
00:08:08.967     11:29:34 app_cmdline -- scripts/common.sh@366 -- # decimal 2
00:08:08.967     11:29:34 app_cmdline -- scripts/common.sh@353 -- # local d=2
00:08:08.967     11:29:34 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:08.967     11:29:34 app_cmdline -- scripts/common.sh@355 -- # echo 2
00:08:08.967    11:29:34 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2
00:08:08.967    11:29:34 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:08.967    11:29:34 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:08.967    11:29:34 app_cmdline -- scripts/common.sh@368 -- # return 0
00:08:08.967    11:29:34 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:08.967    11:29:34 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:08:08.967  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:08.967  		--rc genhtml_branch_coverage=1
00:08:08.967  		--rc genhtml_function_coverage=1
00:08:08.967  		--rc genhtml_legend=1
00:08:08.967  		--rc geninfo_all_blocks=1
00:08:08.967  		--rc geninfo_unexecuted_blocks=1
00:08:08.967  		
00:08:08.967  		'
00:08:08.967    11:29:34 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:08:08.967  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:08.967  		--rc genhtml_branch_coverage=1
00:08:08.967  		--rc genhtml_function_coverage=1
00:08:08.967  		--rc genhtml_legend=1
00:08:08.967  		--rc geninfo_all_blocks=1
00:08:08.967  		--rc geninfo_unexecuted_blocks=1
00:08:08.967  		
00:08:08.967  		'
00:08:08.967    11:29:34 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:08:08.967  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:08.967  		--rc genhtml_branch_coverage=1
00:08:08.967  		--rc genhtml_function_coverage=1
00:08:08.967  		--rc genhtml_legend=1
00:08:08.967  		--rc geninfo_all_blocks=1
00:08:08.967  		--rc geninfo_unexecuted_blocks=1
00:08:08.967  		
00:08:08.967  		'
00:08:08.967    11:29:34 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 
00:08:08.967  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:08.967  		--rc genhtml_branch_coverage=1
00:08:08.967  		--rc genhtml_function_coverage=1
00:08:08.967  		--rc genhtml_legend=1
00:08:08.967  		--rc geninfo_all_blocks=1
00:08:08.967  		--rc geninfo_unexecuted_blocks=1
00:08:08.967  		
00:08:08.967  		'
00:08:08.967   11:29:34 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:08:08.967   11:29:34 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71646
00:08:08.967   11:29:34 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:08:08.967   11:29:34 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71646
00:08:08.967   11:29:34 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 71646 ']'
00:08:08.967   11:29:34 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:08.967   11:29:34 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:08.967   11:29:34 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:08.967  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:08.967   11:29:34 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:08.967   11:29:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:08:08.967  [2024-12-16 11:29:34.898081] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:08:08.967  [2024-12-16 11:29:34.898282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71646 ]
00:08:09.225  [2024-12-16 11:29:35.059175] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:09.225  [2024-12-16 11:29:35.107157] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:09.791   11:29:35 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:09.791   11:29:35 app_cmdline -- common/autotest_common.sh@864 -- # return 0
00:08:09.791   11:29:35 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
00:08:10.050  {
00:08:10.050    "version": "SPDK v24.09.1-pre git sha1 b18e1bd62",
00:08:10.050    "fields": {
00:08:10.050      "major": 24,
00:08:10.050      "minor": 9,
00:08:10.050      "patch": 1,
00:08:10.050      "suffix": "-pre",
00:08:10.050      "commit": "b18e1bd62"
00:08:10.050    }
00:08:10.050  }
00:08:10.050   11:29:35 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:08:10.050   11:29:35 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:08:10.050   11:29:35 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:08:10.050   11:29:35 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:08:10.050    11:29:35 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:08:10.050    11:29:35 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:10.050    11:29:35 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:08:10.050    11:29:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:08:10.050    11:29:35 app_cmdline -- app/cmdline.sh@26 -- # sort
00:08:10.050    11:29:35 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:10.050   11:29:35 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:08:10.050   11:29:35 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:08:10.050   11:29:35 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:08:10.050   11:29:35 app_cmdline -- common/autotest_common.sh@650 -- # local es=0
00:08:10.050   11:29:35 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:08:10.050   11:29:35 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:08:10.050   11:29:35 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:10.050    11:29:35 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:08:10.050   11:29:35 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:10.050    11:29:35 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:08:10.050   11:29:35 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:10.050   11:29:36 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:08:10.050   11:29:36 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:08:10.050   11:29:36 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:08:10.309  request:
00:08:10.309  {
00:08:10.309    "method": "env_dpdk_get_mem_stats",
00:08:10.309    "req_id": 1
00:08:10.309  }
00:08:10.309  Got JSON-RPC error response
00:08:10.309  response:
00:08:10.309  {
00:08:10.309    "code": -32601,
00:08:10.309    "message": "Method not found"
00:08:10.309  }
00:08:10.309   11:29:36 app_cmdline -- common/autotest_common.sh@653 -- # es=1
00:08:10.309   11:29:36 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:08:10.309   11:29:36 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:08:10.309   11:29:36 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:08:10.309   11:29:36 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71646
00:08:10.309   11:29:36 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 71646 ']'
00:08:10.309   11:29:36 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 71646
00:08:10.309    11:29:36 app_cmdline -- common/autotest_common.sh@955 -- # uname
00:08:10.309   11:29:36 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:10.309    11:29:36 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71646
00:08:10.309  killing process with pid 71646
00:08:10.309   11:29:36 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:10.309   11:29:36 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:10.309   11:29:36 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71646'
00:08:10.309   11:29:36 app_cmdline -- common/autotest_common.sh@969 -- # kill 71646
00:08:10.309   11:29:36 app_cmdline -- common/autotest_common.sh@974 -- # wait 71646
00:08:10.878  ************************************
00:08:10.878  END TEST app_cmdline
00:08:10.878  ************************************
00:08:10.878  
00:08:10.878  real	0m2.087s
00:08:10.878  user	0m2.348s
00:08:10.878  sys	0m0.568s
00:08:10.878   11:29:36 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:10.878   11:29:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:08:10.878   11:29:36  -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:08:10.878   11:29:36  -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:10.878   11:29:36  -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:10.878   11:29:36  -- common/autotest_common.sh@10 -- # set +x
00:08:10.878  ************************************
00:08:10.878  START TEST version
00:08:10.878  ************************************
00:08:10.878   11:29:36 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:08:10.878  * Looking for test storage...
00:08:10.878  * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:08:10.878    11:29:36 version -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:08:10.878     11:29:36 version -- common/autotest_common.sh@1681 -- # lcov --version
00:08:10.878     11:29:36 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:08:10.878    11:29:36 version -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:08:10.878    11:29:36 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:10.878    11:29:36 version -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:10.878    11:29:36 version -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:10.878    11:29:36 version -- scripts/common.sh@336 -- # IFS=.-:
00:08:10.878    11:29:36 version -- scripts/common.sh@336 -- # read -ra ver1
00:08:10.878    11:29:36 version -- scripts/common.sh@337 -- # IFS=.-:
00:08:10.878    11:29:36 version -- scripts/common.sh@337 -- # read -ra ver2
00:08:10.878    11:29:36 version -- scripts/common.sh@338 -- # local 'op=<'
00:08:10.878    11:29:36 version -- scripts/common.sh@340 -- # ver1_l=2
00:08:10.878    11:29:36 version -- scripts/common.sh@341 -- # ver2_l=1
00:08:10.878    11:29:36 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:10.878    11:29:36 version -- scripts/common.sh@344 -- # case "$op" in
00:08:10.878    11:29:36 version -- scripts/common.sh@345 -- # : 1
00:08:10.878    11:29:36 version -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:10.878    11:29:36 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:10.878     11:29:36 version -- scripts/common.sh@365 -- # decimal 1
00:08:10.878     11:29:36 version -- scripts/common.sh@353 -- # local d=1
00:08:10.878     11:29:36 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:10.878     11:29:36 version -- scripts/common.sh@355 -- # echo 1
00:08:10.878    11:29:36 version -- scripts/common.sh@365 -- # ver1[v]=1
00:08:10.878     11:29:36 version -- scripts/common.sh@366 -- # decimal 2
00:08:10.878     11:29:36 version -- scripts/common.sh@353 -- # local d=2
00:08:10.878     11:29:36 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:10.878     11:29:36 version -- scripts/common.sh@355 -- # echo 2
00:08:10.878    11:29:36 version -- scripts/common.sh@366 -- # ver2[v]=2
00:08:10.878    11:29:36 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:10.878    11:29:36 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:10.878    11:29:36 version -- scripts/common.sh@368 -- # return 0
00:08:10.878    11:29:36 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:10.878    11:29:36 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:08:10.878  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:10.878  		--rc genhtml_branch_coverage=1
00:08:10.878  		--rc genhtml_function_coverage=1
00:08:10.878  		--rc genhtml_legend=1
00:08:10.878  		--rc geninfo_all_blocks=1
00:08:10.878  		--rc geninfo_unexecuted_blocks=1
00:08:10.878  		
00:08:10.878  		'
00:08:10.878    11:29:36 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:08:10.878  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:10.878  		--rc genhtml_branch_coverage=1
00:08:10.878  		--rc genhtml_function_coverage=1
00:08:10.878  		--rc genhtml_legend=1
00:08:10.878  		--rc geninfo_all_blocks=1
00:08:10.878  		--rc geninfo_unexecuted_blocks=1
00:08:10.878  		
00:08:10.878  		'
00:08:10.878    11:29:36 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:08:10.878  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:10.878  		--rc genhtml_branch_coverage=1
00:08:10.878  		--rc genhtml_function_coverage=1
00:08:10.878  		--rc genhtml_legend=1
00:08:10.878  		--rc geninfo_all_blocks=1
00:08:10.878  		--rc geninfo_unexecuted_blocks=1
00:08:10.878  		
00:08:10.878  		'
00:08:10.878    11:29:36 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 
00:08:10.878  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:10.878  		--rc genhtml_branch_coverage=1
00:08:10.878  		--rc genhtml_function_coverage=1
00:08:10.878  		--rc genhtml_legend=1
00:08:10.878  		--rc geninfo_all_blocks=1
00:08:10.878  		--rc geninfo_unexecuted_blocks=1
00:08:10.878  		
00:08:10.878  		'
00:08:10.878    11:29:36 version -- app/version.sh@17 -- # get_header_version major
00:08:10.878    11:29:36 version -- app/version.sh@14 -- # cut -f2
00:08:10.878    11:29:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:08:10.878    11:29:36 version -- app/version.sh@14 -- # tr -d '"'
00:08:10.878   11:29:36 version -- app/version.sh@17 -- # major=24
00:08:10.878    11:29:36 version -- app/version.sh@18 -- # get_header_version minor
00:08:10.878    11:29:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:08:10.878    11:29:36 version -- app/version.sh@14 -- # cut -f2
00:08:10.878    11:29:36 version -- app/version.sh@14 -- # tr -d '"'
00:08:11.137   11:29:36 version -- app/version.sh@18 -- # minor=9
00:08:11.137    11:29:36 version -- app/version.sh@19 -- # get_header_version patch
00:08:11.137    11:29:36 version -- app/version.sh@14 -- # tr -d '"'
00:08:11.137    11:29:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:08:11.137    11:29:36 version -- app/version.sh@14 -- # cut -f2
00:08:11.137   11:29:36 version -- app/version.sh@19 -- # patch=1
00:08:11.137    11:29:36 version -- app/version.sh@20 -- # get_header_version suffix
00:08:11.137    11:29:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:08:11.137    11:29:36 version -- app/version.sh@14 -- # cut -f2
00:08:11.137    11:29:36 version -- app/version.sh@14 -- # tr -d '"'
00:08:11.137   11:29:36 version -- app/version.sh@20 -- # suffix=-pre
00:08:11.137   11:29:36 version -- app/version.sh@22 -- # version=24.9
00:08:11.137   11:29:36 version -- app/version.sh@25 -- # (( patch != 0 ))
00:08:11.137   11:29:36 version -- app/version.sh@25 -- # version=24.9.1
00:08:11.137   11:29:36 version -- app/version.sh@28 -- # version=24.9.1rc0
00:08:11.137   11:29:36 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:08:11.137    11:29:36 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:08:11.137   11:29:37 version -- app/version.sh@30 -- # py_version=24.9.1rc0
00:08:11.137   11:29:37 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]]
00:08:11.137  ************************************
00:08:11.137  END TEST version
00:08:11.137  ************************************
00:08:11.137  
00:08:11.137  real	0m0.312s
00:08:11.137  user	0m0.180s
00:08:11.137  sys	0m0.182s
00:08:11.137   11:29:37 version -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:11.137   11:29:37 version -- common/autotest_common.sh@10 -- # set +x
00:08:11.137   11:29:37  -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']'
00:08:11.137   11:29:37  -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]]
00:08:11.137   11:29:37  -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh
00:08:11.137   11:29:37  -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:11.137   11:29:37  -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:11.138   11:29:37  -- common/autotest_common.sh@10 -- # set +x
00:08:11.138  ************************************
00:08:11.138  START TEST bdev_raid
00:08:11.138  ************************************
00:08:11.138   11:29:37 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh
00:08:11.138  * Looking for test storage...
00:08:11.138  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:08:11.138    11:29:37 bdev_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:08:11.397     11:29:37 bdev_raid -- common/autotest_common.sh@1681 -- # lcov --version
00:08:11.397     11:29:37 bdev_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:08:11.397    11:29:37 bdev_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:08:11.397    11:29:37 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:11.397    11:29:37 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:11.397    11:29:37 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:11.397    11:29:37 bdev_raid -- scripts/common.sh@336 -- # IFS=.-:
00:08:11.397    11:29:37 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1
00:08:11.397    11:29:37 bdev_raid -- scripts/common.sh@337 -- # IFS=.-:
00:08:11.397    11:29:37 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2
00:08:11.397    11:29:37 bdev_raid -- scripts/common.sh@338 -- # local 'op=<'
00:08:11.397    11:29:37 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2
00:08:11.397    11:29:37 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1
00:08:11.397    11:29:37 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:11.397    11:29:37 bdev_raid -- scripts/common.sh@344 -- # case "$op" in
00:08:11.397    11:29:37 bdev_raid -- scripts/common.sh@345 -- # : 1
00:08:11.397    11:29:37 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:11.397    11:29:37 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:11.397     11:29:37 bdev_raid -- scripts/common.sh@365 -- # decimal 1
00:08:11.397     11:29:37 bdev_raid -- scripts/common.sh@353 -- # local d=1
00:08:11.397     11:29:37 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:11.397     11:29:37 bdev_raid -- scripts/common.sh@355 -- # echo 1
00:08:11.397    11:29:37 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1
00:08:11.397     11:29:37 bdev_raid -- scripts/common.sh@366 -- # decimal 2
00:08:11.397     11:29:37 bdev_raid -- scripts/common.sh@353 -- # local d=2
00:08:11.397     11:29:37 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:11.397     11:29:37 bdev_raid -- scripts/common.sh@355 -- # echo 2
00:08:11.397    11:29:37 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2
00:08:11.397    11:29:37 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:11.397    11:29:37 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:11.397    11:29:37 bdev_raid -- scripts/common.sh@368 -- # return 0
00:08:11.397    11:29:37 bdev_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:11.397    11:29:37 bdev_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:08:11.397  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:11.397  		--rc genhtml_branch_coverage=1
00:08:11.397  		--rc genhtml_function_coverage=1
00:08:11.397  		--rc genhtml_legend=1
00:08:11.397  		--rc geninfo_all_blocks=1
00:08:11.397  		--rc geninfo_unexecuted_blocks=1
00:08:11.397  		
00:08:11.397  		'
00:08:11.397    11:29:37 bdev_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:08:11.397  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:11.397  		--rc genhtml_branch_coverage=1
00:08:11.397  		--rc genhtml_function_coverage=1
00:08:11.397  		--rc genhtml_legend=1
00:08:11.397  		--rc geninfo_all_blocks=1
00:08:11.397  		--rc geninfo_unexecuted_blocks=1
00:08:11.397  		
00:08:11.397  		'
00:08:11.397    11:29:37 bdev_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:08:11.397  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:11.397  		--rc genhtml_branch_coverage=1
00:08:11.397  		--rc genhtml_function_coverage=1
00:08:11.397  		--rc genhtml_legend=1
00:08:11.397  		--rc geninfo_all_blocks=1
00:08:11.397  		--rc geninfo_unexecuted_blocks=1
00:08:11.397  		
00:08:11.397  		'
00:08:11.397    11:29:37 bdev_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 
00:08:11.397  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:11.397  		--rc genhtml_branch_coverage=1
00:08:11.397  		--rc genhtml_function_coverage=1
00:08:11.397  		--rc genhtml_legend=1
00:08:11.397  		--rc geninfo_all_blocks=1
00:08:11.397  		--rc geninfo_unexecuted_blocks=1
00:08:11.397  		
00:08:11.397  		'
00:08:11.397   11:29:37 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:08:11.397    11:29:37 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e
00:08:11.397   11:29:37 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd
00:08:11.397   11:29:37 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest
00:08:11.397   11:29:37 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT
00:08:11.397   11:29:37 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512
00:08:11.397   11:29:37 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test
00:08:11.397   11:29:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:11.397   11:29:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:11.397   11:29:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:11.397  ************************************
00:08:11.398  START TEST raid1_resize_data_offset_test
00:08:11.398  ************************************
00:08:11.398   11:29:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test
00:08:11.398   11:29:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=71812
00:08:11.398  Process raid pid: 71812
00:08:11.398   11:29:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 71812'
00:08:11.398   11:29:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 71812
00:08:11.398   11:29:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:11.398   11:29:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 71812 ']'
00:08:11.398   11:29:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:11.398   11:29:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:11.398  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:11.398   11:29:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:11.398   11:29:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:11.398   11:29:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.398  [2024-12-16 11:29:37.400165] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:08:11.398  [2024-12-16 11:29:37.400298] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:11.656  [2024-12-16 11:29:37.541354] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:11.656  [2024-12-16 11:29:37.586693] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:11.656  [2024-12-16 11:29:37.628877] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:11.656  [2024-12-16 11:29:37.628919] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:12.225   11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:12.225   11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0
00:08:12.225   11:29:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16
00:08:12.225   11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.225   11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.225  malloc0
00:08:12.225   11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:12.225   11:29:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16
00:08:12.225   11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.225   11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.225  malloc1
00:08:12.225   11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:12.225   11:29:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512
00:08:12.225   11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.225   11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.485  null0
00:08:12.485   11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:12.485   11:29:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s
00:08:12.485   11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.485   11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.485  [2024-12-16 11:29:38.302479] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed
00:08:12.485  [2024-12-16 11:29:38.304292] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:08:12.485  [2024-12-16 11:29:38.304366] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed
00:08:12.485  [2024-12-16 11:29:38.304579] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:08:12.485  [2024-12-16 11:29:38.304592] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512
00:08:12.485  [2024-12-16 11:29:38.304856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00
00:08:12.485  [2024-12-16 11:29:38.305013] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:08:12.485  [2024-12-16 11:29:38.305033] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280
00:08:12.485  [2024-12-16 11:29:38.305174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:12.485   11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:12.485    11:29:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:12.485    11:29:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:08:12.485    11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.485    11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.485    11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:12.485   11:29:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 ))
00:08:12.485   11:29:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0
00:08:12.485   11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.485   11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.485  [2024-12-16 11:29:38.362370] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: null0
00:08:12.485   11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:12.485   11:29:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30
00:08:12.485   11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.485   11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.485  malloc2
00:08:12.485   11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:12.485   11:29:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2
00:08:12.485   11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.485   11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.485  [2024-12-16 11:29:38.487727] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:08:12.485  [2024-12-16 11:29:38.491988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:08:12.485   11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:12.485  [2024-12-16 11:29:38.493947] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid
00:08:12.485    11:29:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:12.485    11:29:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:08:12.485    11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.485    11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.485    11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:12.485   11:29:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 ))
00:08:12.485   11:29:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 71812
00:08:12.485   11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 71812 ']'
00:08:12.485   11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 71812
00:08:12.485    11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname
00:08:12.745   11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:12.745    11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71812
00:08:12.745   11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:12.745   11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:12.745  killing process with pid 71812
00:08:12.745   11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71812'
00:08:12.745   11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 71812
00:08:12.745   11:29:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 71812
00:08:12.745  [2024-12-16 11:29:38.588203] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:12.745  [2024-12-16 11:29:38.588917] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled
00:08:12.745  [2024-12-16 11:29:38.588980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:12.745  [2024-12-16 11:29:38.588999] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2
00:08:12.745  [2024-12-16 11:29:38.594440] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:12.745  [2024-12-16 11:29:38.594748] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:12.745  [2024-12-16 11:29:38.594765] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline
00:08:12.745  [2024-12-16 11:29:38.805462] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:13.004   11:29:39 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0
00:08:13.004  
00:08:13.004  real	0m1.729s
00:08:13.004  user	0m1.742s
00:08:13.004  sys	0m0.445s
00:08:13.004   11:29:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:13.004   11:29:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.004  ************************************
00:08:13.004  END TEST raid1_resize_data_offset_test
00:08:13.004  ************************************
00:08:13.263   11:29:39 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0
00:08:13.263   11:29:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:08:13.263   11:29:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:13.263   11:29:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:13.263  ************************************
00:08:13.263  START TEST raid0_resize_superblock_test
00:08:13.263  ************************************
00:08:13.263   11:29:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0
00:08:13.263   11:29:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0
00:08:13.263   11:29:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71863
00:08:13.263   11:29:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71863'
00:08:13.263   11:29:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:13.263  Process raid pid: 71863
00:08:13.263   11:29:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71863
00:08:13.263   11:29:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71863 ']'
00:08:13.263   11:29:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:13.263   11:29:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:13.263  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:13.263   11:29:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:13.263   11:29:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:13.263   11:29:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.263  [2024-12-16 11:29:39.195864] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:08:13.263  [2024-12-16 11:29:39.195996] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:13.522  [2024-12-16 11:29:39.351580] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:13.522  [2024-12-16 11:29:39.396685] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:13.522  [2024-12-16 11:29:39.439500] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:13.522  [2024-12-16 11:29:39.439551] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:14.090   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:14.090   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:08:14.090   11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:08:14.090   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.090   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.090  malloc0
00:08:14.090   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.090   11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:08:14.090   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.090   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.090  [2024-12-16 11:29:40.151450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:08:14.090  [2024-12-16 11:29:40.151518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:14.090  [2024-12-16 11:29:40.151561] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:08:14.090  [2024-12-16 11:29:40.151576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:14.090  [2024-12-16 11:29:40.153953] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:14.090  [2024-12-16 11:29:40.153990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:08:14.090  pt0
00:08:14.090   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.350   11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:08:14.350   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.350   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.350  3522ae3c-a522-417e-b98f-c17fcfc83d25
00:08:14.350   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.350   11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:08:14.350   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.350   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.350  75ab37a8-dc80-42f9-a70a-79dd4e2234f0
00:08:14.350   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.350   11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:08:14.350   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.350   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.350  fac0bb55-232d-4faa-b820-a11dda01f3b5
00:08:14.350   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.350   11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:08:14.350   11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:08:14.350   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.350   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.350  [2024-12-16 11:29:40.286194] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 75ab37a8-dc80-42f9-a70a-79dd4e2234f0 is claimed
00:08:14.350  [2024-12-16 11:29:40.286306] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev fac0bb55-232d-4faa-b820-a11dda01f3b5 is claimed
00:08:14.350  [2024-12-16 11:29:40.286423] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:08:14.350  [2024-12-16 11:29:40.286442] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512
00:08:14.350  [2024-12-16 11:29:40.286728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:08:14.350  [2024-12-16 11:29:40.286895] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:08:14.350  [2024-12-16 11:29:40.286913] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280
00:08:14.350  [2024-12-16 11:29:40.287068] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:14.350   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.350    11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:08:14.350    11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:08:14.350    11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.350    11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.350    11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.350   11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:08:14.350    11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:08:14.350    11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.350    11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.350    11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:08:14.350    11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.350   11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:08:14.350   11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:08:14.350    11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks'
00:08:14.350   11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:08:14.350    11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid
00:08:14.350    11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.350    11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.350  [2024-12-16 11:29:40.402235] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:14.610    11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.610   11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:08:14.610   11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:08:14.610   11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 ))
00:08:14.610   11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:08:14.610   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.610   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.610  [2024-12-16 11:29:40.434079] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:08:14.610  [2024-12-16 11:29:40.434109] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '75ab37a8-dc80-42f9-a70a-79dd4e2234f0' was resized: old size 131072, new size 204800
00:08:14.610   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.610   11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:08:14.610   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.610   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.610  [2024-12-16 11:29:40.445996] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:08:14.610  [2024-12-16 11:29:40.446023] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'fac0bb55-232d-4faa-b820-a11dda01f3b5' was resized: old size 131072, new size 204800
00:08:14.610  [2024-12-16 11:29:40.446077] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:08:14.610   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.610    11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:08:14.610    11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:08:14.610    11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.610    11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.610    11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.610   11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:08:14.610    11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:08:14.610    11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:08:14.610    11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.610    11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.610    11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.610   11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:08:14.610   11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:08:14.610    11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks'
00:08:14.610   11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:08:14.610    11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid
00:08:14.610    11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.610    11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.610  [2024-12-16 11:29:40.553944] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:14.610    11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.610   11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:08:14.610   11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:08:14.610   11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 ))
00:08:14.610   11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:08:14.610   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.610   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.610  [2024-12-16 11:29:40.597703] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
00:08:14.610  [2024-12-16 11:29:40.597775] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
00:08:14.610  [2024-12-16 11:29:40.597795] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:14.610  [2024-12-16 11:29:40.597817] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
00:08:14.610  [2024-12-16 11:29:40.597930] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:14.610  [2024-12-16 11:29:40.597968] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:14.610  [2024-12-16 11:29:40.597980] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline
00:08:14.610   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.610   11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:08:14.610   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.610   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.610  [2024-12-16 11:29:40.605615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:08:14.610  [2024-12-16 11:29:40.605690] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:14.610  [2024-12-16 11:29:40.605710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:08:14.610  [2024-12-16 11:29:40.605723] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:14.610  [2024-12-16 11:29:40.607883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:14.611  [2024-12-16 11:29:40.607920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:08:14.611  [2024-12-16 11:29:40.609389] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 75ab37a8-dc80-42f9-a70a-79dd4e2234f0
00:08:14.611  [2024-12-16 11:29:40.609450] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 75ab37a8-dc80-42f9-a70a-79dd4e2234f0 is claimed
00:08:14.611  [2024-12-16 11:29:40.609532] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev fac0bb55-232d-4faa-b820-a11dda01f3b5
00:08:14.611  [2024-12-16 11:29:40.609577] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev fac0bb55-232d-4faa-b820-a11dda01f3b5 is claimed
00:08:14.611  [2024-12-16 11:29:40.609661] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev fac0bb55-232d-4faa-b820-a11dda01f3b5 (2) smaller than existing raid bdev Raid (3)
00:08:14.611  [2024-12-16 11:29:40.609697] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 75ab37a8-dc80-42f9-a70a-79dd4e2234f0: File exists
00:08:14.611  [2024-12-16 11:29:40.609734] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600
00:08:14.611  [2024-12-16 11:29:40.609743] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512
00:08:14.611  [2024-12-16 11:29:40.609961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:08:14.611  [2024-12-16 11:29:40.610083] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600
00:08:14.611  [2024-12-16 11:29:40.610106] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006600
00:08:14.611  [2024-12-16 11:29:40.610255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:14.611  pt0
00:08:14.611   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.611   11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:08:14.611   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.611   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.611   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.611   11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:08:14.611    11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid
00:08:14.611   11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:08:14.611    11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks'
00:08:14.611    11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.611    11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.611  [2024-12-16 11:29:40.629932] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:14.611    11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.611   11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:08:14.611   11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:08:14.611   11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 ))
00:08:14.611   11:29:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71863
00:08:14.611   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71863 ']'
00:08:14.611   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71863
00:08:14.611    11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # uname
00:08:14.611   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:14.870    11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71863
00:08:14.870  killing process with pid 71863
00:08:14.870   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:14.870   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:14.870   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71863'
00:08:14.870   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71863
00:08:14.870  [2024-12-16 11:29:40.693336] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:14.870   11:29:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71863
00:08:14.870  [2024-12-16 11:29:40.693421] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:14.870  [2024-12-16 11:29:40.693466] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:14.870  [2024-12-16 11:29:40.693475] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Raid, state offline
00:08:14.870  [2024-12-16 11:29:40.852668] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:15.130   11:29:41 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:08:15.130  
00:08:15.130  real	0m1.976s
00:08:15.130  user	0m2.279s
00:08:15.130  sys	0m0.460s
00:08:15.130   11:29:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:15.130   11:29:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:15.130  ************************************
00:08:15.130  END TEST raid0_resize_superblock_test
00:08:15.130  ************************************
00:08:15.130   11:29:41 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1
00:08:15.130   11:29:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:08:15.130   11:29:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:15.130   11:29:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:15.130  ************************************
00:08:15.130  START TEST raid1_resize_superblock_test
00:08:15.130  ************************************
00:08:15.130   11:29:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1
00:08:15.130   11:29:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1
00:08:15.130   11:29:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71934
00:08:15.130   11:29:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:15.130  Process raid pid: 71934
00:08:15.130   11:29:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71934'
00:08:15.130   11:29:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71934
00:08:15.130   11:29:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71934 ']'
00:08:15.130   11:29:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:15.130   11:29:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:15.130  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:15.130   11:29:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:15.130   11:29:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:15.130   11:29:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:15.390  [2024-12-16 11:29:41.235583] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:08:15.390  [2024-12-16 11:29:41.235706] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:15.390  [2024-12-16 11:29:41.397833] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:15.390  [2024-12-16 11:29:41.443502] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:15.650  [2024-12-16 11:29:41.485741] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:15.650  [2024-12-16 11:29:41.485784] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:16.217   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:16.217   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:08:16.217   11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:08:16.217   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.217   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.217  malloc0
00:08:16.217   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.217   11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:08:16.217   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.217   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.217  [2024-12-16 11:29:42.190753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:08:16.217  [2024-12-16 11:29:42.190810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:16.217  [2024-12-16 11:29:42.190834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:08:16.217  [2024-12-16 11:29:42.190851] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:16.217  [2024-12-16 11:29:42.192994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:16.217  [2024-12-16 11:29:42.193028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:08:16.217  pt0
00:08:16.217   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.217   11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:08:16.217   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.217   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.477  5c71255c-e7a1-4d23-b2a8-4a3b9aca6145
00:08:16.477   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.477   11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:08:16.477   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.477   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.477  9089d86a-b9ce-4080-ba94-0a635ac54a71
00:08:16.477   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.477   11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:08:16.477   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.477   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.477  f254f6ca-e0e2-4c7e-b790-eca43ba4e3df
00:08:16.477   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.477   11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:08:16.477   11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:08:16.477   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.477   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.477  [2024-12-16 11:29:42.327589] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9089d86a-b9ce-4080-ba94-0a635ac54a71 is claimed
00:08:16.477  [2024-12-16 11:29:42.327677] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev f254f6ca-e0e2-4c7e-b790-eca43ba4e3df is claimed
00:08:16.477  [2024-12-16 11:29:42.327795] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:08:16.477  [2024-12-16 11:29:42.327833] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512
00:08:16.477  [2024-12-16 11:29:42.328127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:08:16.477  [2024-12-16 11:29:42.328308] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:08:16.477  [2024-12-16 11:29:42.328328] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280
00:08:16.477  [2024-12-16 11:29:42.328485] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:16.477   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.477    11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:08:16.477    11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.477    11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:08:16.477    11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.477    11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.477   11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:08:16.477    11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:08:16.477    11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.477    11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:08:16.477    11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.477    11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.477   11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:08:16.477   11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:08:16.477    11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid
00:08:16.477    11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.477    11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.477   11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:08:16.477    11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks'
00:08:16.477  [2024-12-16 11:29:42.443714] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:16.477    11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.477   11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:08:16.477   11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:08:16.477   11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 ))
00:08:16.477   11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:08:16.477   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.477   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.477  [2024-12-16 11:29:42.483601] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:08:16.477  [2024-12-16 11:29:42.483635] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '9089d86a-b9ce-4080-ba94-0a635ac54a71' was resized: old size 131072, new size 204800
00:08:16.477   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.477   11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:08:16.477   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.477   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.477  [2024-12-16 11:29:42.495463] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:08:16.477  [2024-12-16 11:29:42.495492] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'f254f6ca-e0e2-4c7e-b790-eca43ba4e3df' was resized: old size 131072, new size 204800
00:08:16.477  [2024-12-16 11:29:42.495514] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608
00:08:16.477   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.477    11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:08:16.477    11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.477    11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.477    11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:08:16.477    11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.737   11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:08:16.737    11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:08:16.737    11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:08:16.737    11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.737    11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.737    11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.737   11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:08:16.737   11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:08:16.737    11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid
00:08:16.737   11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:08:16.737    11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks'
00:08:16.737    11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.737    11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.737  [2024-12-16 11:29:42.603395] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:16.737    11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.737   11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:08:16.737   11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:08:16.737   11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 ))
00:08:16.737   11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:08:16.737   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.737   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.737  [2024-12-16 11:29:42.651119] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
00:08:16.737  [2024-12-16 11:29:42.651184] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
00:08:16.737  [2024-12-16 11:29:42.651210] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
00:08:16.737  [2024-12-16 11:29:42.651390] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:16.737  [2024-12-16 11:29:42.651577] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:16.737  [2024-12-16 11:29:42.651637] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:16.737  [2024-12-16 11:29:42.651657] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline
00:08:16.737   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.737   11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:08:16.737   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.737   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.737  [2024-12-16 11:29:42.663035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:08:16.737  [2024-12-16 11:29:42.663086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:16.737  [2024-12-16 11:29:42.663106] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:08:16.737  [2024-12-16 11:29:42.663118] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:16.737  [2024-12-16 11:29:42.665230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:16.737  [2024-12-16 11:29:42.665265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:08:16.737  [2024-12-16 11:29:42.666737] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 9089d86a-b9ce-4080-ba94-0a635ac54a71
00:08:16.737  [2024-12-16 11:29:42.666810] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9089d86a-b9ce-4080-ba94-0a635ac54a71 is claimed
00:08:16.737  [2024-12-16 11:29:42.666887] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev f254f6ca-e0e2-4c7e-b790-eca43ba4e3df
00:08:16.738  [2024-12-16 11:29:42.666924] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev f254f6ca-e0e2-4c7e-b790-eca43ba4e3df is claimed
00:08:16.738  [2024-12-16 11:29:42.667034] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev f254f6ca-e0e2-4c7e-b790-eca43ba4e3df (2) smaller than existing raid bdev Raid (3)
00:08:16.738  [2024-12-16 11:29:42.667061] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 9089d86a-b9ce-4080-ba94-0a635ac54a71: File exists
00:08:16.738  [2024-12-16 11:29:42.667110] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600
00:08:16.738  [2024-12-16 11:29:42.667119] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:08:16.738  [2024-12-16 11:29:42.667349] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:08:16.738  [2024-12-16 11:29:42.667479] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600
00:08:16.738  [2024-12-16 11:29:42.667491] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006600
00:08:16.738  [2024-12-16 11:29:42.667644] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:16.738  pt0
00:08:16.738   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.738   11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:08:16.738   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.738   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.738   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.738   11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:08:16.738    11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid
00:08:16.738   11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:08:16.738    11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks'
00:08:16.738    11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.738    11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.738  [2024-12-16 11:29:42.691321] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:16.738    11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.738   11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:08:16.738   11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:08:16.738   11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 ))
00:08:16.738   11:29:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71934
00:08:16.738   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71934 ']'
00:08:16.738   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71934
00:08:16.738    11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname
00:08:16.738   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:16.738    11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71934
00:08:16.738   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:16.738   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:16.738  killing process with pid 71934
00:08:16.738   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71934'
00:08:16.738   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71934
00:08:16.738  [2024-12-16 11:29:42.775090] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:16.738  [2024-12-16 11:29:42.775180] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:16.738  [2024-12-16 11:29:42.775234] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:16.738  [2024-12-16 11:29:42.775247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Raid, state offline
00:08:16.738   11:29:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71934
00:08:16.998  [2024-12-16 11:29:42.934771] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:17.259   11:29:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:08:17.259  
00:08:17.259  real	0m2.025s
00:08:17.259  user	0m2.295s
00:08:17.259  sys	0m0.520s
00:08:17.259   11:29:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:17.259   11:29:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:17.259  ************************************
00:08:17.259  END TEST raid1_resize_superblock_test
00:08:17.259  ************************************
00:08:17.259    11:29:43 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s
00:08:17.259   11:29:43 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']'
00:08:17.259   11:29:43 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd
00:08:17.259   11:29:43 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true
00:08:17.259   11:29:43 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd
00:08:17.259   11:29:43 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0
00:08:17.259   11:29:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:08:17.259   11:29:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:17.259   11:29:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:17.259  ************************************
00:08:17.259  START TEST raid_function_test_raid0
00:08:17.259  ************************************
00:08:17.259   11:29:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0
00:08:17.259   11:29:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0
00:08:17.259   11:29:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0
00:08:17.259   11:29:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev
00:08:17.259   11:29:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=72009
00:08:17.259   11:29:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:17.259   11:29:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 72009'
00:08:17.259  Process raid pid: 72009
00:08:17.259   11:29:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 72009
00:08:17.259   11:29:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 72009 ']'
00:08:17.259   11:29:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:17.259   11:29:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:17.259   11:29:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:17.259  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:17.259   11:29:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:17.259   11:29:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:08:17.521  [2024-12-16 11:29:43.352344] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:08:17.521  [2024-12-16 11:29:43.352467] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:17.521  [2024-12-16 11:29:43.494369] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:17.521  [2024-12-16 11:29:43.538912] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:17.521  [2024-12-16 11:29:43.581525] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:17.521  [2024-12-16 11:29:43.581587] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:18.460   11:29:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:18.460   11:29:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0
00:08:18.460   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1
00:08:18.460   11:29:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:18.460   11:29:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:08:18.460  Base_1
00:08:18.460   11:29:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:18.460   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2
00:08:18.460   11:29:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:18.460   11:29:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:08:18.460  Base_2
00:08:18.460   11:29:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:18.460   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid
00:08:18.460   11:29:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:18.460   11:29:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:08:18.460  [2024-12-16 11:29:44.238202] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:08:18.460  [2024-12-16 11:29:44.240176] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:08:18.460  [2024-12-16 11:29:44.240249] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:08:18.460  [2024-12-16 11:29:44.240261] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:08:18.460  [2024-12-16 11:29:44.240545] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:08:18.460  [2024-12-16 11:29:44.240676] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:08:18.460  [2024-12-16 11:29:44.240690] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000006280
00:08:18.460  [2024-12-16 11:29:44.240822] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:18.460   11:29:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:18.460    11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online
00:08:18.460    11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)'
00:08:18.460    11:29:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:18.460    11:29:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:08:18.460    11:29:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:18.460   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid
00:08:18.460   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']'
00:08:18.460   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0
00:08:18.460   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:08:18.460   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid')
00:08:18.460   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:08:18.460   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:08:18.460   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:08:18.460   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i
00:08:18.460   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:08:18.460   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:08:18.460   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0
00:08:18.460  [2024-12-16 11:29:44.481821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:08:18.460  /dev/nbd0
00:08:18.460    11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:08:18.460   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:08:18.460   11:29:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:08:18.460   11:29:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i
00:08:18.460   11:29:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:08:18.460   11:29:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:08:18.460   11:29:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:08:18.719   11:29:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break
00:08:18.719   11:29:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:08:18.719   11:29:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:08:18.719   11:29:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:18.719  1+0 records in
00:08:18.719  1+0 records out
00:08:18.719  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296775 s, 13.8 MB/s
00:08:18.719    11:29:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:18.719   11:29:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096
00:08:18.719   11:29:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:18.719   11:29:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:08:18.719   11:29:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0
00:08:18.719   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:18.719   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:08:18.719    11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock
00:08:18.719    11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:08:18.719     11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:08:18.719    11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:08:18.719    {
00:08:18.719      "nbd_device": "/dev/nbd0",
00:08:18.719      "bdev_name": "raid"
00:08:18.719    }
00:08:18.719  ]'
00:08:18.719     11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[
00:08:18.719    {
00:08:18.719      "nbd_device": "/dev/nbd0",
00:08:18.719      "bdev_name": "raid"
00:08:18.719    }
00:08:18.719  ]'
00:08:18.719     11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:18.977    11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:08:18.978     11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:08:18.978     11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:18.978    11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1
00:08:18.978    11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1
00:08:18.978   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1
00:08:18.978   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']'
00:08:18.978   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0
00:08:18.978   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard
00:08:18.978   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:08:18.978   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize
00:08:18.978    11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0
00:08:18.978    11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC
00:08:18.978    11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5
00:08:18.978   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512
00:08:18.978   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096
00:08:18.978   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152
00:08:18.978   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321')
00:08:18.978   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs
00:08:18.978   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456')
00:08:18.978   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums
00:08:18.978   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off
00:08:18.978   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len
00:08:18.978   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
00:08:18.978  4096+0 records in
00:08:18.978  4096+0 records out
00:08:18.978  2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0289226 s, 72.5 MB/s
00:08:18.978   11:29:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:08:19.236  4096+0 records in
00:08:19.236  4096+0 records out
00:08:19.236  2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.208498 s, 10.1 MB/s
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 ))
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:08:19.236  128+0 records in
00:08:19.236  128+0 records out
00:08:19.236  65536 bytes (66 kB, 64 KiB) copied, 0.00119987 s, 54.6 MB/s
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:08:19.236  2035+0 records in
00:08:19.236  2035+0 records out
00:08:19.236  1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0153611 s, 67.8 MB/s
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:08:19.236  456+0 records in
00:08:19.236  456+0 records out
00:08:19.236  233472 bytes (233 kB, 228 KiB) copied, 0.00226947 s, 103 MB/s
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:19.236   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:08:19.495    11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:08:19.495  [2024-12-16 11:29:45.429379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:19.495   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:08:19.495   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:08:19.495   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:19.495   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:19.495   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:08:19.495   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break
00:08:19.495   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0
00:08:19.495    11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock
00:08:19.495    11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:08:19.495     11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:08:19.754    11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:08:19.754     11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:19.754     11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:08:19.754    11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:08:19.754     11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:19.754     11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo ''
00:08:19.754     11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true
00:08:19.754    11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0
00:08:19.754    11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0
00:08:19.754   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0
00:08:19.754   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']'
00:08:19.754   11:29:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 72009
00:08:19.754   11:29:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 72009 ']'
00:08:19.754   11:29:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 72009
00:08:19.754    11:29:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname
00:08:19.754   11:29:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:19.754    11:29:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72009
00:08:19.754  killing process with pid 72009
00:08:19.754   11:29:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:19.754   11:29:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:19.754   11:29:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72009'
00:08:19.754   11:29:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 72009
00:08:19.754  [2024-12-16 11:29:45.754939] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:19.754  [2024-12-16 11:29:45.755073] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:19.754   11:29:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 72009
00:08:19.754  [2024-12-16 11:29:45.755137] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:19.754  [2024-12-16 11:29:45.755150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid, state offline
00:08:19.754  [2024-12-16 11:29:45.779387] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:20.013  ************************************
00:08:20.013  END TEST raid_function_test_raid0
00:08:20.013  ************************************
00:08:20.013   11:29:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0
00:08:20.013  
00:08:20.013  real	0m2.756s
00:08:20.013  user	0m3.386s
00:08:20.013  sys	0m0.952s
00:08:20.013   11:29:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:20.013   11:29:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:08:20.275   11:29:46 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat
00:08:20.275   11:29:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:08:20.275   11:29:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:20.275   11:29:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:20.275  ************************************
00:08:20.275  START TEST raid_function_test_concat
00:08:20.275  ************************************
00:08:20.275   11:29:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat
00:08:20.275   11:29:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat
00:08:20.275   11:29:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0
00:08:20.275   11:29:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev
00:08:20.275   11:29:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=72123
00:08:20.275   11:29:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:20.275   11:29:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 72123'
00:08:20.275  Process raid pid: 72123
00:08:20.275   11:29:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 72123
00:08:20.275   11:29:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 72123 ']'
00:08:20.275   11:29:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:20.275   11:29:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:20.275  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:20.275   11:29:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:20.275   11:29:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:20.275   11:29:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:08:20.275  [2024-12-16 11:29:46.178028] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:08:20.275  [2024-12-16 11:29:46.178186] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:20.275  [2024-12-16 11:29:46.339406] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:20.535  [2024-12-16 11:29:46.392212] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:20.535  [2024-12-16 11:29:46.435861] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:20.535  [2024-12-16 11:29:46.435905] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:21.103   11:29:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:21.103   11:29:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0
00:08:21.103   11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1
00:08:21.103   11:29:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:21.103   11:29:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:08:21.103  Base_1
00:08:21.103   11:29:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:21.103   11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2
00:08:21.103   11:29:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:21.103   11:29:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:08:21.103  Base_2
00:08:21.103   11:29:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:21.103   11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid
00:08:21.103   11:29:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:21.103   11:29:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:08:21.103  [2024-12-16 11:29:47.098205] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:08:21.103  [2024-12-16 11:29:47.100420] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:08:21.103  [2024-12-16 11:29:47.100509] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:08:21.103  [2024-12-16 11:29:47.100523] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:08:21.103  [2024-12-16 11:29:47.100871] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:08:21.103  [2024-12-16 11:29:47.101038] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:08:21.103  [2024-12-16 11:29:47.101054] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000006280
00:08:21.103  [2024-12-16 11:29:47.101245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:21.103   11:29:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:21.103    11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online
00:08:21.103    11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)'
00:08:21.103    11:29:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:21.103    11:29:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:08:21.103    11:29:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:21.103   11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid
00:08:21.104   11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']'
00:08:21.104   11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0
00:08:21.104   11:29:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:08:21.104   11:29:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid')
00:08:21.104   11:29:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:08:21.104   11:29:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:08:21.104   11:29:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:08:21.104   11:29:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i
00:08:21.104   11:29:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:08:21.104   11:29:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:08:21.104   11:29:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0
00:08:21.364  [2024-12-16 11:29:47.365741] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:08:21.364  /dev/nbd0
00:08:21.364    11:29:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:08:21.364   11:29:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:08:21.364   11:29:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:08:21.364   11:29:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i
00:08:21.364   11:29:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:08:21.364   11:29:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:08:21.364   11:29:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:08:21.364   11:29:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break
00:08:21.364   11:29:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:08:21.364   11:29:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:08:21.364   11:29:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:21.364  1+0 records in
00:08:21.364  1+0 records out
00:08:21.364  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428293 s, 9.6 MB/s
00:08:21.364    11:29:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:21.364   11:29:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096
00:08:21.364   11:29:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:21.364   11:29:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:08:21.364   11:29:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0
00:08:21.364   11:29:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:21.364   11:29:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:08:21.364    11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock
00:08:21.364    11:29:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:08:21.623     11:29:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:08:21.624    11:29:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:08:21.624    {
00:08:21.624      "nbd_device": "/dev/nbd0",
00:08:21.624      "bdev_name": "raid"
00:08:21.624    }
00:08:21.624  ]'
00:08:21.624     11:29:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[
00:08:21.624    {
00:08:21.624      "nbd_device": "/dev/nbd0",
00:08:21.624      "bdev_name": "raid"
00:08:21.624    }
00:08:21.624  ]'
00:08:21.624     11:29:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:21.882    11:29:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:08:21.882     11:29:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:08:21.882     11:29:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:21.882    11:29:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1
00:08:21.882    11:29:47 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1
00:08:21.882   11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1
00:08:21.882   11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']'
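The count check just above lists the NBD exports over RPC, extracts each nbd_device with jq, and greps for /dev/nbd entries. A minimal standalone sketch of the same check, assuming the SPDK target is already listening on /var/tmp/spdk.sock and rpc.py is the repo copy used in this run:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    # grep -c exits non-zero when nothing matches, so guard it to keep count=0 usable
    count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    # the concat test exports exactly one device; anything else would trip the "-ne 1" check above
    [ "$count" -eq 1 ] || echo "unexpected NBD device count: $count" >&2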
00:08:21.882   11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0
00:08:21.882   11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard
00:08:21.882   11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:08:21.882   11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize
00:08:21.882    11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0
00:08:21.882    11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC
00:08:21.882    11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5
00:08:21.883   11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512
00:08:21.883   11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096
00:08:21.883   11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152
00:08:21.883   11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321')
00:08:21.883   11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs
00:08:21.883   11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456')
00:08:21.883   11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums
00:08:21.883   11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off
00:08:21.883   11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len
00:08:21.883   11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
00:08:21.883  4096+0 records in
00:08:21.883  4096+0 records out
00:08:21.883  2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0315361 s, 66.5 MB/s
00:08:21.883   11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:08:22.142  4096+0 records in
00:08:22.142  4096+0 records out
00:08:22.142  2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.203429 s, 10.3 MB/s
00:08:22.142   11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0
00:08:22.142   11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:08:22.142   11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 ))
00:08:22.142   11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:08:22.142   11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0
00:08:22.142   11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536
00:08:22.142   11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:08:22.142  128+0 records in
00:08:22.142  128+0 records out
00:08:22.142  65536 bytes (66 kB, 64 KiB) copied, 0.00112253 s, 58.4 MB/s
00:08:22.142   11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:08:22.142   11:29:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:08:22.142   11:29:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:08:22.142   11:29:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:08:22.142   11:29:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:08:22.142   11:29:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336
00:08:22.142   11:29:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920
00:08:22.142   11:29:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:08:22.142  2035+0 records in
00:08:22.142  2035+0 records out
00:08:22.142  1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0150525 s, 69.2 MB/s
00:08:22.142   11:29:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:08:22.142   11:29:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:08:22.142   11:29:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:08:22.142   11:29:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:08:22.142   11:29:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:08:22.142   11:29:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352
00:08:22.142   11:29:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472
00:08:22.142   11:29:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:08:22.142  456+0 records in
00:08:22.142  456+0 records out
00:08:22.142  233472 bytes (233 kB, 228 KiB) copied, 0.00358783 s, 65.1 MB/s
00:08:22.142   11:29:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:08:22.142   11:29:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:08:22.142   11:29:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:08:22.142   11:29:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:08:22.142   11:29:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:08:22.142   11:29:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0
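The loop that just returned writes 2 MiB of random data through /dev/nbd0, then for each (offset, length) pair zeroes that range in the reference file, discards the same byte range on the device, flushes, and byte-compares the whole 2 MiB again. A sketch of a single iteration, using the second round from the trace (offset 1028, 2035 blocks) and assuming the 512-byte logical block size reported by lsblk above:

    blk=512; off=1028; num=2035
    # zero the range in the reference file so it matches what a discarded region should read back as
    dd if=/dev/zero of=/raidtest/raidrandtest bs=$blk seek=$off count=$num conv=notrunc
    # discard the same byte range on the raid device and flush the block-layer cache
    blkdiscard -o $((off * blk)) -l $((num * blk)) /dev/nbd0
    blockdev --flushbufs /dev/nbd0
    # the test expects unmapped blocks to read back as zeroes, so the full 2 MiB still compares equal
    cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0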
00:08:22.143   11:29:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:08:22.143   11:29:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:08:22.143   11:29:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:08:22.143   11:29:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:08:22.143   11:29:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i
00:08:22.143   11:29:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:22.143   11:29:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:08:22.402    11:29:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:08:22.402  [2024-12-16 11:29:48.319293] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:22.402   11:29:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:08:22.402   11:29:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:08:22.402   11:29:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:22.402   11:29:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:22.402   11:29:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:08:22.402   11:29:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break
00:08:22.402   11:29:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0
00:08:22.402    11:29:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock
00:08:22.402    11:29:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:08:22.402     11:29:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:08:22.661    11:29:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:08:22.661     11:29:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:08:22.661     11:29:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:22.661    11:29:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:08:22.661     11:29:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo ''
00:08:22.661     11:29:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:22.661     11:29:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true
00:08:22.661    11:29:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0
00:08:22.661    11:29:48 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0
00:08:22.661   11:29:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0
00:08:22.661   11:29:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']'
00:08:22.661   11:29:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 72123
00:08:22.661   11:29:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 72123 ']'
00:08:22.661   11:29:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 72123
00:08:22.661    11:29:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname
00:08:22.661   11:29:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:22.661    11:29:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72123
00:08:22.661   11:29:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:22.661   11:29:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:22.661  killing process with pid 72123
00:08:22.661   11:29:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72123'
00:08:22.661   11:29:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 72123
00:08:22.661  [2024-12-16 11:29:48.659548] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:22.661  [2024-12-16 11:29:48.659683] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:22.661   11:29:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 72123
00:08:22.661  [2024-12-16 11:29:48.659746] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:22.661  [2024-12-16 11:29:48.659767] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid, state offline
00:08:22.661  [2024-12-16 11:29:48.684488] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:22.919   11:29:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0
00:08:22.919  
00:08:22.919  real	0m2.845s
00:08:22.919  user	0m3.571s
00:08:22.919  sys	0m0.942s
00:08:22.919   11:29:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:22.919   11:29:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:08:22.919  ************************************
00:08:22.919  END TEST raid_function_test_concat
00:08:22.919  ************************************
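Stripped of the xtrace noise, the concat function test above reduces to a short RPC sequence. A condensed sketch using the same commands, socket, and sizes seen in the trace (two 32 MiB malloc bdevs with 512-byte blocks, 64 KiB strip):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc bdev_malloc_create 32 512 -b Base_1
    $rpc bdev_malloc_create 32 512 -b Base_2
    $rpc bdev_raid_create -z 64 -r concat -b 'Base_1 Base_2' -n raid
    $rpc nbd_start_disk raid /dev/nbd0        # expose the raid bdev as a block device
    # ... read/write and unmap verification against /dev/nbd0 ...
    $rpc nbd_stop_disk /dev/nbd0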
00:08:23.179   11:29:48 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0
00:08:23.179   11:29:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:08:23.179   11:29:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:23.179   11:29:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:23.179  ************************************
00:08:23.179  START TEST raid0_resize_test
00:08:23.179  ************************************
00:08:23.179   11:29:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0
00:08:23.179   11:29:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0
00:08:23.179   11:29:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:08:23.179   11:29:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:08:23.179   11:29:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:08:23.179   11:29:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:08:23.179   11:29:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:08:23.179   11:29:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
00:08:23.179   11:29:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:08:23.179   11:29:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=72240
00:08:23.179  Process raid pid: 72240
00:08:23.179   11:29:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 72240'
00:08:23.179   11:29:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:23.179   11:29:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 72240
00:08:23.179   11:29:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 72240 ']'
00:08:23.179   11:29:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:23.179   11:29:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:23.179  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:23.179   11:29:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:23.179   11:29:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:23.179   11:29:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.179  [2024-12-16 11:29:49.086803] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:08:23.179  [2024-12-16 11:29:49.086936] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:23.438  [2024-12-16 11:29:49.251775] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:23.438  [2024-12-16 11:29:49.307969] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:23.438  [2024-12-16 11:29:49.351766] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:23.438  [2024-12-16 11:29:49.351812] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:24.007   11:29:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:24.007   11:29:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0
00:08:24.007   11:29:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512
00:08:24.007   11:29:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:24.007   11:29:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.007  Base_1
00:08:24.007   11:29:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:24.007   11:29:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512
00:08:24.007   11:29:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:24.007   11:29:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.007  Base_2
00:08:24.007   11:29:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:24.007   11:29:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']'
00:08:24.007   11:29:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid
00:08:24.007   11:29:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:24.007   11:29:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.007  [2024-12-16 11:29:49.997222] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:08:24.007  [2024-12-16 11:29:49.999365] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:08:24.007  [2024-12-16 11:29:49.999436] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:08:24.007  [2024-12-16 11:29:49.999458] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:08:24.007  [2024-12-16 11:29:49.999799] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00
00:08:24.007  [2024-12-16 11:29:49.999921] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:08:24.007  [2024-12-16 11:29:49.999934] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280
00:08:24.007  [2024-12-16 11:29:50.000067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:24.007   11:29:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:24.007   11:29:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64
00:08:24.007   11:29:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:24.007   11:29:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.007  [2024-12-16 11:29:50.005170] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:08:24.007  [2024-12-16 11:29:50.005204] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072
00:08:24.007  true
00:08:24.007   11:29:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:24.007    11:29:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid
00:08:24.007    11:29:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:24.007    11:29:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.007    11:29:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks'
00:08:24.007  [2024-12-16 11:29:50.017393] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:24.007    11:29:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:24.007   11:29:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072
00:08:24.007   11:29:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64
00:08:24.007   11:29:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']'
00:08:24.007   11:29:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64
00:08:24.007   11:29:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']'
00:08:24.007   11:29:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64
00:08:24.007   11:29:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:24.007   11:29:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.007  [2024-12-16 11:29:50.065120] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:08:24.007  [2024-12-16 11:29:50.065155] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072
00:08:24.007  [2024-12-16 11:29:50.065188] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144
00:08:24.007  true
00:08:24.007   11:29:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:24.266    11:29:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid
00:08:24.266    11:29:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks'
00:08:24.266    11:29:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:24.266    11:29:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.266  [2024-12-16 11:29:50.077288] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:24.266    11:29:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:24.266   11:29:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144
00:08:24.266   11:29:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128
00:08:24.266   11:29:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']'
00:08:24.266   11:29:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128
00:08:24.266   11:29:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']'
00:08:24.266   11:29:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 72240
00:08:24.266   11:29:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 72240 ']'
00:08:24.266   11:29:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 72240
00:08:24.267    11:29:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname
00:08:24.267   11:29:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:24.267    11:29:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72240
00:08:24.267   11:29:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:24.267  killing process with pid 72240
00:08:24.267   11:29:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:24.267   11:29:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72240'
00:08:24.267   11:29:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 72240
00:08:24.267   11:29:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 72240
00:08:24.267  [2024-12-16 11:29:50.161286] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:24.267  [2024-12-16 11:29:50.161406] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:24.267  [2024-12-16 11:29:50.161474] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:24.267  [2024-12-16 11:29:50.161489] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline
00:08:24.267  [2024-12-16 11:29:50.163155] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:24.525   11:29:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0
00:08:24.525  
00:08:24.525  real	0m1.405s
00:08:24.525  user	0m1.597s
00:08:24.525  sys	0m0.336s
00:08:24.525   11:29:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:24.525   11:29:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.525  ************************************
00:08:24.525  END TEST raid0_resize_test
00:08:24.525  ************************************
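The raid0 resize test above checks that the array size is the sum of its members and therefore only grows once every base has been resized. The arithmetic implied by the trace, alongside the bdev_null_resize / bdev_get_bdevs calls it drives:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    # two 32 MiB null bdevs (65536 blocks each) -> initial Raid size 131072 blocks (64 MiB)
    $rpc bdev_null_resize Base_1 64
    $rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'   # still 131072: raid0 uses min(base sizes) per member
    $rpc bdev_null_resize Base_2 64
    $rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'   # now 262144 blocks (128 MiB)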
00:08:24.525   11:29:50 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1
00:08:24.525   11:29:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:08:24.525   11:29:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:24.525   11:29:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:24.525  ************************************
00:08:24.525  START TEST raid1_resize_test
00:08:24.525  ************************************
00:08:24.525   11:29:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1
00:08:24.525   11:29:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1
00:08:24.525   11:29:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:08:24.525   11:29:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:08:24.525   11:29:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:08:24.525   11:29:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:08:24.525   11:29:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:08:24.525   11:29:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
00:08:24.525   11:29:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:08:24.525   11:29:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=72291
00:08:24.525  Process raid pid: 72291
00:08:24.526   11:29:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 72291'
00:08:24.526   11:29:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 72291
00:08:24.526   11:29:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 72291 ']'
00:08:24.526   11:29:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:24.526   11:29:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:24.526  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:24.526   11:29:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:24.526   11:29:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:24.526   11:29:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:24.526   11:29:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.526  [2024-12-16 11:29:50.558847] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:08:24.526  [2024-12-16 11:29:50.559006] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:24.785  [2024-12-16 11:29:50.723405] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:24.785  [2024-12-16 11:29:50.779212] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:24.785  [2024-12-16 11:29:50.824350] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:24.785  [2024-12-16 11:29:50.824393] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.730  Base_1
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.730  Base_2
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']'
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.730  [2024-12-16 11:29:51.482899] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:08:25.730  [2024-12-16 11:29:51.485056] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:08:25.730  [2024-12-16 11:29:51.485128] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:08:25.730  [2024-12-16 11:29:51.485141] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:08:25.730  [2024-12-16 11:29:51.485467] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00
00:08:25.730  [2024-12-16 11:29:51.485643] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:08:25.730  [2024-12-16 11:29:51.485675] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280
00:08:25.730  [2024-12-16 11:29:51.485815] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.730  [2024-12-16 11:29:51.490829] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:08:25.730  [2024-12-16 11:29:51.490860] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072
00:08:25.730  true
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:25.730    11:29:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid
00:08:25.730    11:29:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks'
00:08:25.730    11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:25.730    11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.730  [2024-12-16 11:29:51.507041] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:25.730    11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']'
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']'
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.730  [2024-12-16 11:29:51.550777] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:08:25.730  [2024-12-16 11:29:51.550811] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072
00:08:25.730  [2024-12-16 11:29:51.550848] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072
00:08:25.730  true
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:25.730    11:29:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid
00:08:25.730    11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:25.730    11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.730    11:29:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks'
00:08:25.730  [2024-12-16 11:29:51.562944] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:25.730    11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']'
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']'
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 72291
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 72291 ']'
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 72291
00:08:25.730    11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:25.730    11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72291
00:08:25.730  killing process with pid 72291
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72291'
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 72291
00:08:25.730   11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 72291
00:08:25.730  [2024-12-16 11:29:51.653232] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:25.730  [2024-12-16 11:29:51.653349] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:25.730  [2024-12-16 11:29:51.653888] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:25.730  [2024-12-16 11:29:51.653916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline
00:08:25.730  [2024-12-16 11:29:51.655162] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:25.989   11:29:51 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0
00:08:25.989  
00:08:25.989  real	0m1.443s
00:08:25.989  user	0m1.658s
00:08:25.989  sys	0m0.333s
00:08:25.989   11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:25.989   11:29:51 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.989  ************************************
00:08:25.989  END TEST raid1_resize_test
00:08:25.989  ************************************
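The raid1 variant of the same test differs only in the expected size: a mirror is as large as its smallest member, so it starts at 65536 blocks (32 MiB) rather than the sum, stays there while only Base_1 is resized, and grows to 131072 blocks (64 MiB) once Base_2 catches up, which is what the two dumps above show. The same check in isolation:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'   # 65536 until both bases grow, then 131072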
00:08:25.989   11:29:51 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4}
00:08:25.989   11:29:51 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:08:25.989   11:29:51 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false
00:08:25.989   11:29:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:08:25.989   11:29:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:25.989   11:29:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:25.989  ************************************
00:08:25.989  START TEST raid_state_function_test
00:08:25.989  ************************************
00:08:25.989   11:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false
00:08:25.989   11:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:08:25.989   11:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:08:25.989   11:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:08:25.989   11:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:08:25.989    11:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:08:25.989    11:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:25.989    11:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:08:25.989    11:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:25.989    11:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:25.989    11:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:08:25.989    11:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:25.989    11:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:25.989   11:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:08:25.989   11:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:08:25.989   11:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:08:25.989   11:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:08:25.989   11:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:08:25.989   11:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:08:25.989   11:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:08:25.989   11:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:08:25.989   11:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:08:25.989   11:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:08:25.989   11:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:08:25.989   11:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72342
00:08:25.989  Process raid pid: 72342
00:08:25.989   11:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72342'
00:08:25.989   11:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72342
00:08:25.989   11:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 72342 ']'
00:08:25.989   11:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:25.989   11:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:25.989   11:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:25.989  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:25.989   11:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:25.989   11:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:25.989   11:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.246  [2024-12-16 11:29:52.066905] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:08:26.246  [2024-12-16 11:29:52.067042] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:26.246  [2024-12-16 11:29:52.228846] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:26.246  [2024-12-16 11:29:52.283197] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:26.505  [2024-12-16 11:29:52.328792] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:26.505  [2024-12-16 11:29:52.328857] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:27.074   11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:27.074   11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:08:27.074   11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:27.074   11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:27.074   11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.074  [2024-12-16 11:29:53.028226] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:27.074  [2024-12-16 11:29:53.028285] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:27.074  [2024-12-16 11:29:53.028299] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:27.074  [2024-12-16 11:29:53.028311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:27.074   11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:27.074   11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:08:27.074   11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:27.074   11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:27.074   11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:27.074   11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:27.074   11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:27.074   11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:27.074   11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:27.074   11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:27.074   11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:27.074    11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:27.074    11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:27.074    11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.074    11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:27.074    11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:27.074   11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:27.074    "name": "Existed_Raid",
00:08:27.074    "uuid": "00000000-0000-0000-0000-000000000000",
00:08:27.074    "strip_size_kb": 64,
00:08:27.074    "state": "configuring",
00:08:27.074    "raid_level": "raid0",
00:08:27.074    "superblock": false,
00:08:27.074    "num_base_bdevs": 2,
00:08:27.074    "num_base_bdevs_discovered": 0,
00:08:27.074    "num_base_bdevs_operational": 2,
00:08:27.074    "base_bdevs_list": [
00:08:27.074      {
00:08:27.074        "name": "BaseBdev1",
00:08:27.074        "uuid": "00000000-0000-0000-0000-000000000000",
00:08:27.074        "is_configured": false,
00:08:27.074        "data_offset": 0,
00:08:27.074        "data_size": 0
00:08:27.074      },
00:08:27.074      {
00:08:27.074        "name": "BaseBdev2",
00:08:27.074        "uuid": "00000000-0000-0000-0000-000000000000",
00:08:27.074        "is_configured": false,
00:08:27.074        "data_offset": 0,
00:08:27.074        "data_size": 0
00:08:27.074      }
00:08:27.074    ]
00:08:27.074  }'
00:08:27.074   11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:27.074   11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
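verify_raid_bdev_state pulls the raid's JSON over RPC and asserts individual fields from the dump above. A rough equivalent of those checks, assuming the field names shown in the Existed_Raid dump and that jq's -e flag is available to turn an empty selection into a non-zero exit status:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    jq -e 'select(.state == "configuring" and .raid_level == "raid0"
                  and .strip_size_kb == 64 and .num_base_bdevs_operational == 2)' \
        <<<"$info" >/dev/null || echo "Existed_Raid not in the expected configuring state" >&2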
00:08:27.643   11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:27.643   11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:27.643   11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.643  [2024-12-16 11:29:53.531661] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:27.643  [2024-12-16 11:29:53.531730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:08:27.643   11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:27.643   11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:27.643   11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:27.643   11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.643  [2024-12-16 11:29:53.539692] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:27.643  [2024-12-16 11:29:53.539755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:27.643  [2024-12-16 11:29:53.539768] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:27.644  [2024-12-16 11:29:53.539780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.644  [2024-12-16 11:29:53.557450] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:27.644  BaseBdev1
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.644  [
00:08:27.644  {
00:08:27.644  "name": "BaseBdev1",
00:08:27.644  "aliases": [
00:08:27.644  "b3960ffd-e5b7-4bfb-b0ac-a20c7c5a3ad6"
00:08:27.644  ],
00:08:27.644  "product_name": "Malloc disk",
00:08:27.644  "block_size": 512,
00:08:27.644  "num_blocks": 65536,
00:08:27.644  "uuid": "b3960ffd-e5b7-4bfb-b0ac-a20c7c5a3ad6",
00:08:27.644  "assigned_rate_limits": {
00:08:27.644  "rw_ios_per_sec": 0,
00:08:27.644  "rw_mbytes_per_sec": 0,
00:08:27.644  "r_mbytes_per_sec": 0,
00:08:27.644  "w_mbytes_per_sec": 0
00:08:27.644  },
00:08:27.644  "claimed": true,
00:08:27.644  "claim_type": "exclusive_write",
00:08:27.644  "zoned": false,
00:08:27.644  "supported_io_types": {
00:08:27.644  "read": true,
00:08:27.644  "write": true,
00:08:27.644  "unmap": true,
00:08:27.644  "flush": true,
00:08:27.644  "reset": true,
00:08:27.644  "nvme_admin": false,
00:08:27.644  "nvme_io": false,
00:08:27.644  "nvme_io_md": false,
00:08:27.644  "write_zeroes": true,
00:08:27.644  "zcopy": true,
00:08:27.644  "get_zone_info": false,
00:08:27.644  "zone_management": false,
00:08:27.644  "zone_append": false,
00:08:27.644  "compare": false,
00:08:27.644  "compare_and_write": false,
00:08:27.644  "abort": true,
00:08:27.644  "seek_hole": false,
00:08:27.644  "seek_data": false,
00:08:27.644  "copy": true,
00:08:27.644  "nvme_iov_md": false
00:08:27.644  },
00:08:27.644  "memory_domains": [
00:08:27.644  {
00:08:27.644  "dma_device_id": "system",
00:08:27.644  "dma_device_type": 1
00:08:27.644  },
00:08:27.644  {
00:08:27.644  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:27.644  "dma_device_type": 2
00:08:27.644  }
00:08:27.644  ],
00:08:27.644  "driver_specific": {}
00:08:27.644  }
00:08:27.644  ]
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:27.644    11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:27.644    11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:27.644    11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:27.644    11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.644    11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:27.644    "name": "Existed_Raid",
00:08:27.644    "uuid": "00000000-0000-0000-0000-000000000000",
00:08:27.644    "strip_size_kb": 64,
00:08:27.644    "state": "configuring",
00:08:27.644    "raid_level": "raid0",
00:08:27.644    "superblock": false,
00:08:27.644    "num_base_bdevs": 2,
00:08:27.644    "num_base_bdevs_discovered": 1,
00:08:27.644    "num_base_bdevs_operational": 2,
00:08:27.644    "base_bdevs_list": [
00:08:27.644      {
00:08:27.644        "name": "BaseBdev1",
00:08:27.644        "uuid": "b3960ffd-e5b7-4bfb-b0ac-a20c7c5a3ad6",
00:08:27.644        "is_configured": true,
00:08:27.644        "data_offset": 0,
00:08:27.644        "data_size": 65536
00:08:27.644      },
00:08:27.644      {
00:08:27.644        "name": "BaseBdev2",
00:08:27.644        "uuid": "00000000-0000-0000-0000-000000000000",
00:08:27.644        "is_configured": false,
00:08:27.644        "data_offset": 0,
00:08:27.644        "data_size": 0
00:08:27.644      }
00:08:27.644    ]
00:08:27.644  }'
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:27.644   11:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:28.213   11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:28.213   11:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:28.213   11:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:28.213  [2024-12-16 11:29:54.032707] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:28.213  [2024-12-16 11:29:54.032773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:08:28.213   11:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:28.213   11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:28.213   11:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:28.213   11:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:28.213  [2024-12-16 11:29:54.040725] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:28.213  [2024-12-16 11:29:54.042923] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:28.213  [2024-12-16 11:29:54.042972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:28.213   11:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:28.213   11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:08:28.213   11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:28.213   11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:08:28.213   11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:28.213   11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:28.213   11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:28.213   11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:28.213   11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:28.213   11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:28.213   11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:28.213   11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:28.213   11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:28.213    11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:28.213    11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:28.213    11:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:28.213    11:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:28.213    11:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:28.213   11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:28.213    "name": "Existed_Raid",
00:08:28.213    "uuid": "00000000-0000-0000-0000-000000000000",
00:08:28.213    "strip_size_kb": 64,
00:08:28.213    "state": "configuring",
00:08:28.213    "raid_level": "raid0",
00:08:28.213    "superblock": false,
00:08:28.213    "num_base_bdevs": 2,
00:08:28.213    "num_base_bdevs_discovered": 1,
00:08:28.213    "num_base_bdevs_operational": 2,
00:08:28.213    "base_bdevs_list": [
00:08:28.213      {
00:08:28.213        "name": "BaseBdev1",
00:08:28.213        "uuid": "b3960ffd-e5b7-4bfb-b0ac-a20c7c5a3ad6",
00:08:28.213        "is_configured": true,
00:08:28.213        "data_offset": 0,
00:08:28.213        "data_size": 65536
00:08:28.213      },
00:08:28.213      {
00:08:28.213        "name": "BaseBdev2",
00:08:28.213        "uuid": "00000000-0000-0000-0000-000000000000",
00:08:28.213        "is_configured": false,
00:08:28.213        "data_offset": 0,
00:08:28.213        "data_size": 0
00:08:28.213      }
00:08:28.213    ]
00:08:28.213  }'
00:08:28.213   11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:28.213   11:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:28.472   11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:28.472   11:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:28.472   11:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:28.734  [2024-12-16 11:29:54.546839] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:28.734  [2024-12-16 11:29:54.546917] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:08:28.734  [2024-12-16 11:29:54.546935] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:08:28.734  [2024-12-16 11:29:54.547459] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:08:28.734  [2024-12-16 11:29:54.547782] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:08:28.734  [2024-12-16 11:29:54.547828] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:08:28.734  BaseBdev2
00:08:28.734  [2024-12-16 11:29:54.548215] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:28.734   11:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:28.734   11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:08:28.734   11:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:08:28.734   11:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:28.734   11:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:08:28.734   11:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:28.734   11:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:28.734   11:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:28.734   11:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:28.734   11:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:28.734   11:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:28.734   11:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:28.734   11:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:28.734   11:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:28.734  [
00:08:28.734  {
00:08:28.734  "name": "BaseBdev2",
00:08:28.734  "aliases": [
00:08:28.734  "5206330b-673b-4fa6-9d4c-cb7e6c1e80d1"
00:08:28.734  ],
00:08:28.734  "product_name": "Malloc disk",
00:08:28.734  "block_size": 512,
00:08:28.734  "num_blocks": 65536,
00:08:28.734  "uuid": "5206330b-673b-4fa6-9d4c-cb7e6c1e80d1",
00:08:28.734  "assigned_rate_limits": {
00:08:28.734  "rw_ios_per_sec": 0,
00:08:28.734  "rw_mbytes_per_sec": 0,
00:08:28.734  "r_mbytes_per_sec": 0,
00:08:28.734  "w_mbytes_per_sec": 0
00:08:28.734  },
00:08:28.734  "claimed": true,
00:08:28.734  "claim_type": "exclusive_write",
00:08:28.734  "zoned": false,
00:08:28.734  "supported_io_types": {
00:08:28.734  "read": true,
00:08:28.734  "write": true,
00:08:28.734  "unmap": true,
00:08:28.734  "flush": true,
00:08:28.734  "reset": true,
00:08:28.734  "nvme_admin": false,
00:08:28.734  "nvme_io": false,
00:08:28.734  "nvme_io_md": false,
00:08:28.734  "write_zeroes": true,
00:08:28.734  "zcopy": true,
00:08:28.734  "get_zone_info": false,
00:08:28.734  "zone_management": false,
00:08:28.734  "zone_append": false,
00:08:28.734  "compare": false,
00:08:28.734  "compare_and_write": false,
00:08:28.734  "abort": true,
00:08:28.734  "seek_hole": false,
00:08:28.734  "seek_data": false,
00:08:28.734  "copy": true,
00:08:28.734  "nvme_iov_md": false
00:08:28.734  },
00:08:28.734  "memory_domains": [
00:08:28.734  {
00:08:28.734  "dma_device_id": "system",
00:08:28.734  "dma_device_type": 1
00:08:28.734  },
00:08:28.734  {
00:08:28.734  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:28.734  "dma_device_type": 2
00:08:28.734  }
00:08:28.734  ],
00:08:28.734  "driver_specific": {}
00:08:28.734  }
00:08:28.734  ]
00:08:28.734   11:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:28.734   11:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:08:28.734   11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:28.734   11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:28.734   11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2
00:08:28.734   11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:28.734   11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:28.734   11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:28.734   11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:28.734   11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:28.734   11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:28.734   11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:28.734   11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:28.734   11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:28.734    11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:28.734    11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:28.734    11:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:28.734    11:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:28.734    11:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:28.734   11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:28.734    "name": "Existed_Raid",
00:08:28.734    "uuid": "cd264a26-a2d3-46d9-99ab-6842da1dc1d3",
00:08:28.734    "strip_size_kb": 64,
00:08:28.734    "state": "online",
00:08:28.734    "raid_level": "raid0",
00:08:28.734    "superblock": false,
00:08:28.734    "num_base_bdevs": 2,
00:08:28.734    "num_base_bdevs_discovered": 2,
00:08:28.734    "num_base_bdevs_operational": 2,
00:08:28.734    "base_bdevs_list": [
00:08:28.734      {
00:08:28.734        "name": "BaseBdev1",
00:08:28.734        "uuid": "b3960ffd-e5b7-4bfb-b0ac-a20c7c5a3ad6",
00:08:28.734        "is_configured": true,
00:08:28.734        "data_offset": 0,
00:08:28.734        "data_size": 65536
00:08:28.734      },
00:08:28.734      {
00:08:28.734        "name": "BaseBdev2",
00:08:28.734        "uuid": "5206330b-673b-4fa6-9d4c-cb7e6c1e80d1",
00:08:28.734        "is_configured": true,
00:08:28.734        "data_offset": 0,
00:08:28.734        "data_size": 65536
00:08:28.734      }
00:08:28.734    ]
00:08:28.734  }'
00:08:28.734   11:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:28.734   11:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.333   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:08:29.333   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:08:29.333   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:29.333   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:29.333   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:29.333   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:29.333    11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:08:29.333    11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.333    11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.333    11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:29.333  [2024-12-16 11:29:55.078360] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:29.333    11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.333   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:29.333    "name": "Existed_Raid",
00:08:29.334    "aliases": [
00:08:29.334      "cd264a26-a2d3-46d9-99ab-6842da1dc1d3"
00:08:29.334    ],
00:08:29.334    "product_name": "Raid Volume",
00:08:29.334    "block_size": 512,
00:08:29.334    "num_blocks": 131072,
00:08:29.334    "uuid": "cd264a26-a2d3-46d9-99ab-6842da1dc1d3",
00:08:29.334    "assigned_rate_limits": {
00:08:29.334      "rw_ios_per_sec": 0,
00:08:29.334      "rw_mbytes_per_sec": 0,
00:08:29.334      "r_mbytes_per_sec": 0,
00:08:29.334      "w_mbytes_per_sec": 0
00:08:29.334    },
00:08:29.334    "claimed": false,
00:08:29.334    "zoned": false,
00:08:29.334    "supported_io_types": {
00:08:29.334      "read": true,
00:08:29.334      "write": true,
00:08:29.334      "unmap": true,
00:08:29.334      "flush": true,
00:08:29.334      "reset": true,
00:08:29.334      "nvme_admin": false,
00:08:29.334      "nvme_io": false,
00:08:29.334      "nvme_io_md": false,
00:08:29.334      "write_zeroes": true,
00:08:29.334      "zcopy": false,
00:08:29.334      "get_zone_info": false,
00:08:29.334      "zone_management": false,
00:08:29.334      "zone_append": false,
00:08:29.334      "compare": false,
00:08:29.334      "compare_and_write": false,
00:08:29.334      "abort": false,
00:08:29.334      "seek_hole": false,
00:08:29.334      "seek_data": false,
00:08:29.334      "copy": false,
00:08:29.334      "nvme_iov_md": false
00:08:29.334    },
00:08:29.334    "memory_domains": [
00:08:29.334      {
00:08:29.334        "dma_device_id": "system",
00:08:29.334        "dma_device_type": 1
00:08:29.334      },
00:08:29.334      {
00:08:29.334        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:29.334        "dma_device_type": 2
00:08:29.334      },
00:08:29.334      {
00:08:29.334        "dma_device_id": "system",
00:08:29.334        "dma_device_type": 1
00:08:29.334      },
00:08:29.334      {
00:08:29.334        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:29.334        "dma_device_type": 2
00:08:29.334      }
00:08:29.334    ],
00:08:29.334    "driver_specific": {
00:08:29.334      "raid": {
00:08:29.334        "uuid": "cd264a26-a2d3-46d9-99ab-6842da1dc1d3",
00:08:29.334        "strip_size_kb": 64,
00:08:29.334        "state": "online",
00:08:29.334        "raid_level": "raid0",
00:08:29.334        "superblock": false,
00:08:29.334        "num_base_bdevs": 2,
00:08:29.334        "num_base_bdevs_discovered": 2,
00:08:29.334        "num_base_bdevs_operational": 2,
00:08:29.334        "base_bdevs_list": [
00:08:29.334          {
00:08:29.334            "name": "BaseBdev1",
00:08:29.334            "uuid": "b3960ffd-e5b7-4bfb-b0ac-a20c7c5a3ad6",
00:08:29.334            "is_configured": true,
00:08:29.334            "data_offset": 0,
00:08:29.334            "data_size": 65536
00:08:29.334          },
00:08:29.334          {
00:08:29.334            "name": "BaseBdev2",
00:08:29.334            "uuid": "5206330b-673b-4fa6-9d4c-cb7e6c1e80d1",
00:08:29.334            "is_configured": true,
00:08:29.334            "data_offset": 0,
00:08:29.334            "data_size": 65536
00:08:29.334          }
00:08:29.334        ]
00:08:29.334      }
00:08:29.334    }
00:08:29.334  }'
00:08:29.334    11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:29.334   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:08:29.334  BaseBdev2'
00:08:29.334    11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:29.334   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:08:29.334   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:29.334    11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:29.334    11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:08:29.334    11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.334    11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.334    11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.334   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:08:29.334   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:08:29.334   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:29.334    11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:08:29.334    11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.334    11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:29.334    11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.334    11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.334   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:08:29.334   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:08:29.334   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:29.334   11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.334   11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.334  [2024-12-16 11:29:55.317731] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:29.334  [2024-12-16 11:29:55.317784] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:29.334  [2024-12-16 11:29:55.317859] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:29.334   11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.334   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:08:29.334   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0
00:08:29.334   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:29.334   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:08:29.334   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:08:29.334   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1
00:08:29.334   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:29.334   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:08:29.334   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:29.334   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:29.334   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:08:29.334   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:29.334   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:29.334   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:29.334   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:29.334    11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:29.334    11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:29.334    11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.334    11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.334    11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.334   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:29.334    "name": "Existed_Raid",
00:08:29.334    "uuid": "cd264a26-a2d3-46d9-99ab-6842da1dc1d3",
00:08:29.334    "strip_size_kb": 64,
00:08:29.334    "state": "offline",
00:08:29.334    "raid_level": "raid0",
00:08:29.334    "superblock": false,
00:08:29.334    "num_base_bdevs": 2,
00:08:29.334    "num_base_bdevs_discovered": 1,
00:08:29.334    "num_base_bdevs_operational": 1,
00:08:29.334    "base_bdevs_list": [
00:08:29.334      {
00:08:29.334        "name": null,
00:08:29.334        "uuid": "00000000-0000-0000-0000-000000000000",
00:08:29.334        "is_configured": false,
00:08:29.334        "data_offset": 0,
00:08:29.334        "data_size": 65536
00:08:29.334      },
00:08:29.334      {
00:08:29.334        "name": "BaseBdev2",
00:08:29.334        "uuid": "5206330b-673b-4fa6-9d4c-cb7e6c1e80d1",
00:08:29.334        "is_configured": true,
00:08:29.334        "data_offset": 0,
00:08:29.334        "data_size": 65536
00:08:29.334      }
00:08:29.334    ]
00:08:29.334  }'
00:08:29.334   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:29.334   11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.903   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:08:29.903   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:29.903    11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:29.903    11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.903    11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.903    11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:08:29.903    11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.903   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:08:29.903   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:29.903   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:08:29.903   11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.903   11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.903  [2024-12-16 11:29:55.893183] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:29.903  [2024-12-16 11:29:55.893263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline
00:08:29.903   11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.903   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:08:29.903   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:29.903    11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:29.903    11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:08:29.903    11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.903    11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.903    11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.903   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:08:29.903   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:08:29.903   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:08:29.903   11:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 72342
00:08:29.903   11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 72342 ']'
00:08:29.903   11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 72342
00:08:29.903    11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname
00:08:29.903   11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:29.903    11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72342
00:08:30.163   11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:30.163   11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:30.163  killing process with pid 72342
00:08:30.163   11:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72342'
00:08:30.163   11:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 72342
00:08:30.163  [2024-12-16 11:29:56.001102] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:30.163   11:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 72342
00:08:30.163  [2024-12-16 11:29:56.002187] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:30.422   11:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:08:30.422  
00:08:30.422  real	0m4.283s
00:08:30.422  user	0m6.805s
00:08:30.422  sys	0m0.841s
00:08:30.422   11:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:30.422   11:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:30.422  ************************************
00:08:30.422  END TEST raid_state_function_test
00:08:30.422  ************************************
00:08:30.422   11:29:56 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true
00:08:30.422   11:29:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:08:30.422   11:29:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:30.422   11:29:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:30.422  ************************************
00:08:30.422  START TEST raid_state_function_test_sb
00:08:30.422  ************************************
00:08:30.422   11:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true
00:08:30.422   11:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:08:30.422   11:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:08:30.422   11:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:08:30.422   11:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:08:30.422    11:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:08:30.422    11:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:30.422    11:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:08:30.422    11:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:30.422    11:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:30.422    11:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:08:30.423    11:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:30.423    11:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:30.423   11:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:08:30.423   11:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:08:30.423   11:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:08:30.423   11:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:08:30.423   11:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:08:30.423   11:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:08:30.423   11:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:08:30.423   11:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:08:30.423   11:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:08:30.423   11:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:08:30.423   11:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:08:30.423   11:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72584
00:08:30.423   11:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:30.423   11:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72584'
00:08:30.423  Process raid pid: 72584
00:08:30.423   11:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72584
00:08:30.423   11:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 72584 ']'
00:08:30.423   11:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:30.423   11:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:30.423  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:30.423   11:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:30.423   11:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:30.423   11:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:30.423  [2024-12-16 11:29:56.460201] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:08:30.423  [2024-12-16 11:29:56.460412] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:30.682  [2024-12-16 11:29:56.621886] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:30.682  [2024-12-16 11:29:56.675848] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:30.682  [2024-12-16 11:29:56.720967] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:30.682  [2024-12-16 11:29:56.721010] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:31.618   11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:31.618   11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0
00:08:31.619   11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:31.619   11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:31.619   11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:31.619  [2024-12-16 11:29:57.427367] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:31.619  [2024-12-16 11:29:57.427422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:31.619  [2024-12-16 11:29:57.427452] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:31.619  [2024-12-16 11:29:57.427465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:31.619   11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:31.619   11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:08:31.619   11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:31.619   11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:31.619   11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:31.619   11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:31.619   11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:31.619   11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:31.619   11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:31.619   11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:31.619   11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:31.619    11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:31.619    11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:31.619    11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:31.619    11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:31.619    11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:31.619   11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:31.619    "name": "Existed_Raid",
00:08:31.619    "uuid": "d339deb9-088b-4a78-abca-956a30e9fc0e",
00:08:31.619    "strip_size_kb": 64,
00:08:31.619    "state": "configuring",
00:08:31.619    "raid_level": "raid0",
00:08:31.619    "superblock": true,
00:08:31.619    "num_base_bdevs": 2,
00:08:31.619    "num_base_bdevs_discovered": 0,
00:08:31.619    "num_base_bdevs_operational": 2,
00:08:31.619    "base_bdevs_list": [
00:08:31.619      {
00:08:31.619        "name": "BaseBdev1",
00:08:31.619        "uuid": "00000000-0000-0000-0000-000000000000",
00:08:31.619        "is_configured": false,
00:08:31.619        "data_offset": 0,
00:08:31.619        "data_size": 0
00:08:31.619      },
00:08:31.619      {
00:08:31.619        "name": "BaseBdev2",
00:08:31.619        "uuid": "00000000-0000-0000-0000-000000000000",
00:08:31.619        "is_configured": false,
00:08:31.619        "data_offset": 0,
00:08:31.619        "data_size": 0
00:08:31.619      }
00:08:31.619    ]
00:08:31.619  }'
00:08:31.619   11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:31.619   11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:31.878   11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:31.878   11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:31.878   11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:31.878  [2024-12-16 11:29:57.882479] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:31.878  [2024-12-16 11:29:57.882533] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:08:31.878   11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:31.878   11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:31.878   11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:31.878   11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:31.878  [2024-12-16 11:29:57.894496] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:31.878  [2024-12-16 11:29:57.894558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:31.878  [2024-12-16 11:29:57.894570] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:31.879  [2024-12-16 11:29:57.894581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:31.879   11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:31.879   11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:31.879   11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:31.879   11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:31.879  [2024-12-16 11:29:57.916065] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:31.879  BaseBdev1
00:08:31.879   11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:31.879   11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:08:31.879   11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:08:31.879   11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:31.879   11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:08:31.879   11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:31.879   11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:31.879   11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:31.879   11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:31.879   11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:31.879   11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:31.879   11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:31.879   11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:31.879   11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:31.879  [
00:08:31.879  {
00:08:31.879  "name": "BaseBdev1",
00:08:31.879  "aliases": [
00:08:31.879  "e4dc18c3-5f09-4cd5-9d34-8263c9c85de7"
00:08:31.879  ],
00:08:31.879  "product_name": "Malloc disk",
00:08:31.879  "block_size": 512,
00:08:31.879  "num_blocks": 65536,
00:08:31.879  "uuid": "e4dc18c3-5f09-4cd5-9d34-8263c9c85de7",
00:08:31.879  "assigned_rate_limits": {
00:08:32.137  "rw_ios_per_sec": 0,
00:08:32.137  "rw_mbytes_per_sec": 0,
00:08:32.137  "r_mbytes_per_sec": 0,
00:08:32.137  "w_mbytes_per_sec": 0
00:08:32.137  },
00:08:32.137  "claimed": true,
00:08:32.137  "claim_type": "exclusive_write",
00:08:32.137  "zoned": false,
00:08:32.137  "supported_io_types": {
00:08:32.137  "read": true,
00:08:32.138  "write": true,
00:08:32.138  "unmap": true,
00:08:32.138  "flush": true,
00:08:32.138  "reset": true,
00:08:32.138  "nvme_admin": false,
00:08:32.138  "nvme_io": false,
00:08:32.138  "nvme_io_md": false,
00:08:32.138  "write_zeroes": true,
00:08:32.138  "zcopy": true,
00:08:32.138  "get_zone_info": false,
00:08:32.138  "zone_management": false,
00:08:32.138  "zone_append": false,
00:08:32.138  "compare": false,
00:08:32.138  "compare_and_write": false,
00:08:32.138  "abort": true,
00:08:32.138  "seek_hole": false,
00:08:32.138  "seek_data": false,
00:08:32.138  "copy": true,
00:08:32.138  "nvme_iov_md": false
00:08:32.138  },
00:08:32.138  "memory_domains": [
00:08:32.138  {
00:08:32.138  "dma_device_id": "system",
00:08:32.138  "dma_device_type": 1
00:08:32.138  },
00:08:32.138  {
00:08:32.138  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:32.138  "dma_device_type": 2
00:08:32.138  }
00:08:32.138  ],
00:08:32.138  "driver_specific": {}
00:08:32.138  }
00:08:32.138  ]
00:08:32.138   11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:32.138   11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:08:32.138   11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:08:32.138   11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:32.138   11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:32.138   11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:32.138   11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:32.138   11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:32.138   11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:32.138   11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:32.138   11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:32.138   11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:32.138    11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:32.138    11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:32.138    11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:32.138    11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:32.138    11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:32.138   11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:32.138    "name": "Existed_Raid",
00:08:32.138    "uuid": "b4f5b135-d6bd-45a8-823b-c9055c5174e2",
00:08:32.138    "strip_size_kb": 64,
00:08:32.138    "state": "configuring",
00:08:32.138    "raid_level": "raid0",
00:08:32.138    "superblock": true,
00:08:32.138    "num_base_bdevs": 2,
00:08:32.138    "num_base_bdevs_discovered": 1,
00:08:32.138    "num_base_bdevs_operational": 2,
00:08:32.138    "base_bdevs_list": [
00:08:32.138      {
00:08:32.138        "name": "BaseBdev1",
00:08:32.138        "uuid": "e4dc18c3-5f09-4cd5-9d34-8263c9c85de7",
00:08:32.138        "is_configured": true,
00:08:32.138        "data_offset": 2048,
00:08:32.138        "data_size": 63488
00:08:32.138      },
00:08:32.138      {
00:08:32.138        "name": "BaseBdev2",
00:08:32.138        "uuid": "00000000-0000-0000-0000-000000000000",
00:08:32.138        "is_configured": false,
00:08:32.138        "data_offset": 0,
00:08:32.138        "data_size": 0
00:08:32.138      }
00:08:32.138    ]
00:08:32.138  }'
00:08:32.138   11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:32.138   11:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:32.397   11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:32.397   11:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:32.397   11:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:32.397  [2024-12-16 11:29:58.431310] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:32.397  [2024-12-16 11:29:58.431373] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:08:32.397   11:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:32.397   11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:32.397   11:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:32.397   11:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:32.397  [2024-12-16 11:29:58.443334] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:32.397  [2024-12-16 11:29:58.445599] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:32.397  [2024-12-16 11:29:58.445644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:32.397   11:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:32.397   11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:08:32.397   11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:32.397   11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:08:32.397   11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:32.397   11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:32.397   11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:32.397   11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:32.397   11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:32.397   11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:32.397   11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:32.397   11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:32.397   11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:32.397    11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:32.397    11:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:32.397    11:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:32.397    11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:32.657    11:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:32.657   11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:32.657    "name": "Existed_Raid",
00:08:32.657    "uuid": "acc41133-ce0a-4093-8470-2f34a0142c8a",
00:08:32.657    "strip_size_kb": 64,
00:08:32.657    "state": "configuring",
00:08:32.657    "raid_level": "raid0",
00:08:32.657    "superblock": true,
00:08:32.657    "num_base_bdevs": 2,
00:08:32.657    "num_base_bdevs_discovered": 1,
00:08:32.657    "num_base_bdevs_operational": 2,
00:08:32.657    "base_bdevs_list": [
00:08:32.657      {
00:08:32.657        "name": "BaseBdev1",
00:08:32.657        "uuid": "e4dc18c3-5f09-4cd5-9d34-8263c9c85de7",
00:08:32.657        "is_configured": true,
00:08:32.657        "data_offset": 2048,
00:08:32.657        "data_size": 63488
00:08:32.657      },
00:08:32.657      {
00:08:32.657        "name": "BaseBdev2",
00:08:32.657        "uuid": "00000000-0000-0000-0000-000000000000",
00:08:32.657        "is_configured": false,
00:08:32.657        "data_offset": 0,
00:08:32.657        "data_size": 0
00:08:32.657      }
00:08:32.657    ]
00:08:32.657  }'
00:08:32.657   11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:32.657   11:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
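With only BaseBdev1 created, Existed_Raid stays in the "configuring" state: num_base_bdevs_discovered is 1 of 2 and the BaseBdev2 slot is an unconfigured placeholder with the all-zero UUID. The check the test performs here can be reproduced against a running SPDK target; this is a minimal sketch assuming the rpc.py client from the SPDK repo, while the RPC name and jq filter are the ones used above:

    # query the raid bdev state while only one base bdev exists (illustrative)
    ./scripts/rpc.py bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state'
    # expected output: configuring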
00:08:32.916   11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:32.916   11:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:32.916   11:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:32.916  [2024-12-16 11:29:58.936452] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:32.916  [2024-12-16 11:29:58.936741] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:08:32.916  [2024-12-16 11:29:58.936768] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:08:32.916  BaseBdev2
00:08:32.916  [2024-12-16 11:29:58.937215] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:08:32.916  [2024-12-16 11:29:58.937429] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:08:32.916  [2024-12-16 11:29:58.937466] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:08:32.916  [2024-12-16 11:29:58.937657] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:32.916   11:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:32.916   11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:08:32.916   11:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:08:32.916   11:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:32.916   11:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:08:32.916   11:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:32.916   11:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:32.916   11:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:32.916   11:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:32.916   11:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:32.916   11:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:32.916   11:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:32.916   11:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:32.916   11:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:32.916  [
00:08:32.916  {
00:08:32.916  "name": "BaseBdev2",
00:08:32.916  "aliases": [
00:08:32.917  "46a7c844-a8f6-4f05-84dc-676a1814d8e4"
00:08:32.917  ],
00:08:32.917  "product_name": "Malloc disk",
00:08:32.917  "block_size": 512,
00:08:32.917  "num_blocks": 65536,
00:08:32.917  "uuid": "46a7c844-a8f6-4f05-84dc-676a1814d8e4",
00:08:32.917  "assigned_rate_limits": {
00:08:32.917  "rw_ios_per_sec": 0,
00:08:32.917  "rw_mbytes_per_sec": 0,
00:08:32.917  "r_mbytes_per_sec": 0,
00:08:32.917  "w_mbytes_per_sec": 0
00:08:32.917  },
00:08:32.917  "claimed": true,
00:08:32.917  "claim_type": "exclusive_write",
00:08:32.917  "zoned": false,
00:08:32.917  "supported_io_types": {
00:08:32.917  "read": true,
00:08:32.917  "write": true,
00:08:32.917  "unmap": true,
00:08:32.917  "flush": true,
00:08:32.917  "reset": true,
00:08:32.917  "nvme_admin": false,
00:08:32.917  "nvme_io": false,
00:08:32.917  "nvme_io_md": false,
00:08:32.917  "write_zeroes": true,
00:08:32.917  "zcopy": true,
00:08:32.917  "get_zone_info": false,
00:08:32.917  "zone_management": false,
00:08:32.917  "zone_append": false,
00:08:32.917  "compare": false,
00:08:32.917  "compare_and_write": false,
00:08:32.917  "abort": true,
00:08:32.917  "seek_hole": false,
00:08:32.917  "seek_data": false,
00:08:32.917  "copy": true,
00:08:32.917  "nvme_iov_md": false
00:08:32.917  },
00:08:32.917  "memory_domains": [
00:08:32.917  {
00:08:32.917  "dma_device_id": "system",
00:08:32.917  "dma_device_type": 1
00:08:32.917  },
00:08:32.917  {
00:08:32.917  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:32.917  "dma_device_type": 2
00:08:32.917  }
00:08:32.917  ],
00:08:32.917  "driver_specific": {}
00:08:32.917  }
00:08:32.917  ]
00:08:32.917   11:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:32.917   11:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:08:32.917   11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:32.917   11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:32.917   11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2
00:08:32.917   11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:32.917   11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:32.917   11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:32.917   11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:32.917   11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:32.917   11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:32.917   11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:32.917   11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:32.917   11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:32.917    11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:32.917    11:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:32.917    11:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:32.917    11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:33.176    11:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:33.176   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:33.176    "name": "Existed_Raid",
00:08:33.176    "uuid": "acc41133-ce0a-4093-8470-2f34a0142c8a",
00:08:33.176    "strip_size_kb": 64,
00:08:33.176    "state": "online",
00:08:33.176    "raid_level": "raid0",
00:08:33.176    "superblock": true,
00:08:33.176    "num_base_bdevs": 2,
00:08:33.176    "num_base_bdevs_discovered": 2,
00:08:33.176    "num_base_bdevs_operational": 2,
00:08:33.176    "base_bdevs_list": [
00:08:33.176      {
00:08:33.176        "name": "BaseBdev1",
00:08:33.176        "uuid": "e4dc18c3-5f09-4cd5-9d34-8263c9c85de7",
00:08:33.176        "is_configured": true,
00:08:33.176        "data_offset": 2048,
00:08:33.176        "data_size": 63488
00:08:33.176      },
00:08:33.176      {
00:08:33.176        "name": "BaseBdev2",
00:08:33.176        "uuid": "46a7c844-a8f6-4f05-84dc-676a1814d8e4",
00:08:33.176        "is_configured": true,
00:08:33.176        "data_offset": 2048,
00:08:33.176        "data_size": 63488
00:08:33.176      }
00:08:33.176    ]
00:08:33.176  }'
00:08:33.176   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:33.176   11:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
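Once the second base bdev is created and claimed (bdev_malloc_create followed by bdev_wait_for_examine above), the raid bdev moves from "configuring" to "online" and num_base_bdevs_discovered reaches 2. A minimal sketch of that sequence, assuming the rpc.py client; the commands and sizes mirror the calls in the log:

    # add the missing base bdev, let examine claim it, then re-check the state
    ./scripts/rpc.py bdev_malloc_create 32 512 -b BaseBdev2
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state'
    # expected output: online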
00:08:33.435   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:08:33.435   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:08:33.435   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:33.435   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:33.435   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:08:33.435   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:33.435    11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:08:33.435    11:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:33.435    11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:33.435    11:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:33.435  [2024-12-16 11:29:59.440013] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:33.435    11:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:33.435   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:33.435    "name": "Existed_Raid",
00:08:33.435    "aliases": [
00:08:33.435      "acc41133-ce0a-4093-8470-2f34a0142c8a"
00:08:33.435    ],
00:08:33.435    "product_name": "Raid Volume",
00:08:33.435    "block_size": 512,
00:08:33.435    "num_blocks": 126976,
00:08:33.435    "uuid": "acc41133-ce0a-4093-8470-2f34a0142c8a",
00:08:33.435    "assigned_rate_limits": {
00:08:33.435      "rw_ios_per_sec": 0,
00:08:33.435      "rw_mbytes_per_sec": 0,
00:08:33.435      "r_mbytes_per_sec": 0,
00:08:33.435      "w_mbytes_per_sec": 0
00:08:33.435    },
00:08:33.435    "claimed": false,
00:08:33.435    "zoned": false,
00:08:33.435    "supported_io_types": {
00:08:33.435      "read": true,
00:08:33.435      "write": true,
00:08:33.435      "unmap": true,
00:08:33.435      "flush": true,
00:08:33.435      "reset": true,
00:08:33.435      "nvme_admin": false,
00:08:33.435      "nvme_io": false,
00:08:33.435      "nvme_io_md": false,
00:08:33.435      "write_zeroes": true,
00:08:33.435      "zcopy": false,
00:08:33.435      "get_zone_info": false,
00:08:33.435      "zone_management": false,
00:08:33.435      "zone_append": false,
00:08:33.435      "compare": false,
00:08:33.435      "compare_and_write": false,
00:08:33.435      "abort": false,
00:08:33.435      "seek_hole": false,
00:08:33.435      "seek_data": false,
00:08:33.435      "copy": false,
00:08:33.435      "nvme_iov_md": false
00:08:33.435    },
00:08:33.435    "memory_domains": [
00:08:33.435      {
00:08:33.435        "dma_device_id": "system",
00:08:33.435        "dma_device_type": 1
00:08:33.435      },
00:08:33.435      {
00:08:33.435        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:33.435        "dma_device_type": 2
00:08:33.435      },
00:08:33.435      {
00:08:33.435        "dma_device_id": "system",
00:08:33.435        "dma_device_type": 1
00:08:33.435      },
00:08:33.435      {
00:08:33.435        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:33.435        "dma_device_type": 2
00:08:33.435      }
00:08:33.435    ],
00:08:33.435    "driver_specific": {
00:08:33.435      "raid": {
00:08:33.435        "uuid": "acc41133-ce0a-4093-8470-2f34a0142c8a",
00:08:33.435        "strip_size_kb": 64,
00:08:33.435        "state": "online",
00:08:33.435        "raid_level": "raid0",
00:08:33.435        "superblock": true,
00:08:33.435        "num_base_bdevs": 2,
00:08:33.435        "num_base_bdevs_discovered": 2,
00:08:33.435        "num_base_bdevs_operational": 2,
00:08:33.435        "base_bdevs_list": [
00:08:33.435          {
00:08:33.435            "name": "BaseBdev1",
00:08:33.435            "uuid": "e4dc18c3-5f09-4cd5-9d34-8263c9c85de7",
00:08:33.435            "is_configured": true,
00:08:33.435            "data_offset": 2048,
00:08:33.435            "data_size": 63488
00:08:33.435          },
00:08:33.435          {
00:08:33.435            "name": "BaseBdev2",
00:08:33.435            "uuid": "46a7c844-a8f6-4f05-84dc-676a1814d8e4",
00:08:33.435            "is_configured": true,
00:08:33.435            "data_offset": 2048,
00:08:33.435            "data_size": 63488
00:08:33.435          }
00:08:33.435        ]
00:08:33.435      }
00:08:33.435    }
00:08:33.435  }'
00:08:33.435    11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:33.693   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:08:33.693  BaseBdev2'
00:08:33.693    11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:33.693   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:08:33.693   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:33.693    11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:08:33.693    11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:33.693    11:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:33.693    11:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:33.693    11:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:33.693   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:08:33.693   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:08:33.693   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:33.693    11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:08:33.693    11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:33.693    11:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:33.693    11:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:33.693    11:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:33.694   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:08:33.694   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
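verify_raid_bdev_properties compares the tuple (block_size, md_size, md_interleave, dif_type) of the raid volume against each configured base bdev; here all of them report '512   ' (512-byte blocks, no metadata, no DIF), so both comparisons pass. The same jq expression can be run by hand; a sketch assuming the rpc.py client:

    # raid volume and base bdevs must report identical block/metadata/DIF properties
    ./scripts/rpc.py bdev_get_bdevs -b Existed_Raid \
        | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
    ./scripts/rpc.py bdev_get_bdevs -b BaseBdev1 \
        | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'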
00:08:33.694   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:33.694   11:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:33.694   11:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:33.694  [2024-12-16 11:29:59.691502] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:33.694  [2024-12-16 11:29:59.691559] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:33.694  [2024-12-16 11:29:59.691631] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:33.694   11:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:33.694   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:08:33.694   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0
00:08:33.694   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:33.694   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1
00:08:33.694   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:08:33.694   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1
00:08:33.694   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:33.694   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:08:33.694   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:33.694   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:33.694   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:08:33.694   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:33.694   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:33.694   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:33.694   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:33.694    11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:33.694    11:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:33.694    11:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:33.694    11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:33.694    11:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:33.694   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:33.694    "name": "Existed_Raid",
00:08:33.694    "uuid": "acc41133-ce0a-4093-8470-2f34a0142c8a",
00:08:33.694    "strip_size_kb": 64,
00:08:33.694    "state": "offline",
00:08:33.694    "raid_level": "raid0",
00:08:33.694    "superblock": true,
00:08:33.694    "num_base_bdevs": 2,
00:08:33.694    "num_base_bdevs_discovered": 1,
00:08:33.694    "num_base_bdevs_operational": 1,
00:08:33.694    "base_bdevs_list": [
00:08:33.694      {
00:08:33.694        "name": null,
00:08:33.694        "uuid": "00000000-0000-0000-0000-000000000000",
00:08:33.694        "is_configured": false,
00:08:33.694        "data_offset": 0,
00:08:33.694        "data_size": 63488
00:08:33.694      },
00:08:33.694      {
00:08:33.694        "name": "BaseBdev2",
00:08:33.694        "uuid": "46a7c844-a8f6-4f05-84dc-676a1814d8e4",
00:08:33.694        "is_configured": true,
00:08:33.694        "data_offset": 2048,
00:08:33.694        "data_size": 63488
00:08:33.694      }
00:08:33.694    ]
00:08:33.694  }'
00:08:33.694   11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:33.694   11:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
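Because raid0 has no redundancy (has_redundancy returns 1 above, so expected_state becomes offline), deleting BaseBdev1 takes Existed_Raid offline rather than leaving it degraded: the dump shows state "offline", num_base_bdevs_discovered 1, and the removed slot reduced to a null name with the all-zero UUID. The step can be reproduced with the same RPCs; a sketch assuming the rpc.py client:

    # removing a base bdev from a raid0 volume is expected to take it offline
    ./scripts/rpc.py bdev_malloc_delete BaseBdev1
    ./scripts/rpc.py bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state'
    # expected output: offline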
00:08:34.259   11:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:08:34.259   11:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:34.259    11:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:34.259    11:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:34.259    11:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:34.259    11:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:08:34.259    11:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:34.259   11:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:08:34.259   11:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:34.259   11:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:08:34.259   11:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:34.259   11:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:34.259  [2024-12-16 11:30:00.234752] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:34.259  [2024-12-16 11:30:00.234818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline
00:08:34.259   11:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:34.259   11:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:08:34.259   11:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:34.259    11:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:08:34.259    11:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:34.260    11:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:34.260    11:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:34.260    11:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:34.260   11:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:08:34.260   11:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:08:34.260   11:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:08:34.260   11:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72584
00:08:34.260   11:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 72584 ']'
00:08:34.260   11:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 72584
00:08:34.260    11:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname
00:08:34.260   11:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:34.260    11:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72584
00:08:34.518   11:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:34.518   11:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:34.518   11:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72584'
00:08:34.518  killing process with pid 72584
00:08:34.518   11:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 72584
00:08:34.518  [2024-12-16 11:30:00.335215] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:34.518   11:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 72584
00:08:34.518  [2024-12-16 11:30:00.336318] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:34.776   11:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:08:34.777  
00:08:34.777  real	0m4.265s
00:08:34.777  user	0m6.793s
00:08:34.777  sys	0m0.823s
00:08:34.777   11:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:34.777   11:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:34.777  ************************************
00:08:34.777  END TEST raid_state_function_test_sb
00:08:34.777  ************************************
00:08:34.777   11:30:00 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2
00:08:34.777   11:30:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:08:34.777   11:30:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:34.777   11:30:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:34.777  ************************************
00:08:34.777  START TEST raid_superblock_test
00:08:34.777  ************************************
00:08:34.777   11:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2
00:08:34.777   11:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0
00:08:34.777   11:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:08:34.777   11:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:08:34.777   11:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:08:34.777   11:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:08:34.777   11:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:08:34.777   11:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:08:34.777   11:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:08:34.777   11:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:08:34.777   11:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:08:34.777   11:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:08:34.777   11:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:08:34.777   11:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:08:34.777   11:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']'
00:08:34.777   11:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:08:34.777   11:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:08:34.777   11:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72831
00:08:34.777   11:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:08:34.777   11:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72831
00:08:34.777   11:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72831 ']'
00:08:34.777   11:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:34.777   11:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:34.777  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:34.777   11:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:34.777   11:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:34.777   11:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:34.777  [2024-12-16 11:30:00.749281] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:08:34.777  [2024-12-16 11:30:00.749449] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72831 ]
00:08:35.036  [2024-12-16 11:30:00.900322] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:35.036  [2024-12-16 11:30:00.954685] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:35.036  [2024-12-16 11:30:00.999894] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:35.036  [2024-12-16 11:30:00.999941] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:35.974  malloc1
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:35.974  [2024-12-16 11:30:01.700091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:08:35.974  [2024-12-16 11:30:01.700180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:35.974  [2024-12-16 11:30:01.700206] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:08:35.974  [2024-12-16 11:30:01.700233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:35.974  [2024-12-16 11:30:01.702836] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:35.974  [2024-12-16 11:30:01.702890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:08:35.974  pt1
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:35.974  malloc2
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:35.974  [2024-12-16 11:30:01.740211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:08:35.974  [2024-12-16 11:30:01.740280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:35.974  [2024-12-16 11:30:01.740302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:08:35.974  [2024-12-16 11:30:01.740315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:35.974  [2024-12-16 11:30:01.742924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:35.974  [2024-12-16 11:30:01.742969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:08:35.974  pt2
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:35.974  [2024-12-16 11:30:01.752259] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:08:35.974  [2024-12-16 11:30:01.754449] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:08:35.974  [2024-12-16 11:30:01.754648] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:08:35.974  [2024-12-16 11:30:01.754680] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:08:35.974  [2024-12-16 11:30:01.754982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:08:35.974  [2024-12-16 11:30:01.755131] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:08:35.974  [2024-12-16 11:30:01.755146] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:08:35.974  [2024-12-16 11:30:01.755313] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:35.974    11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:35.974    11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:35.974    11:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:35.974    11:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:35.974    11:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:35.974   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:35.974    "name": "raid_bdev1",
00:08:35.974    "uuid": "397b7d99-f37b-4892-831e-c9ecd3e89298",
00:08:35.974    "strip_size_kb": 64,
00:08:35.974    "state": "online",
00:08:35.974    "raid_level": "raid0",
00:08:35.974    "superblock": true,
00:08:35.974    "num_base_bdevs": 2,
00:08:35.974    "num_base_bdevs_discovered": 2,
00:08:35.974    "num_base_bdevs_operational": 2,
00:08:35.975    "base_bdevs_list": [
00:08:35.975      {
00:08:35.975        "name": "pt1",
00:08:35.975        "uuid": "00000000-0000-0000-0000-000000000001",
00:08:35.975        "is_configured": true,
00:08:35.975        "data_offset": 2048,
00:08:35.975        "data_size": 63488
00:08:35.975      },
00:08:35.975      {
00:08:35.975        "name": "pt2",
00:08:35.975        "uuid": "00000000-0000-0000-0000-000000000002",
00:08:35.975        "is_configured": true,
00:08:35.975        "data_offset": 2048,
00:08:35.975        "data_size": 63488
00:08:35.975      }
00:08:35.975    ]
00:08:35.975  }'
00:08:35.975   11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:35.975   11:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
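raid_superblock_test builds each leg as a passthru bdev (pt1, pt2) over a malloc bdev and creates raid_bdev1 with -s, so an on-disk superblock is written through every base bdev; the dump above reflects that with data_offset 2048 and data_size 63488 out of the 65536 blocks each base bdev provides. A sketch of the setup, assuming the rpc.py client (names, sizes and UUIDs are the ones the test uses):

    # one leg: malloc bdev wrapped in a passthru bdev with a fixed UUID
    ./scripts/rpc.py bdev_malloc_create 32 512 -b malloc1
    ./scripts/rpc.py bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    # ...repeat for malloc2/pt2, then create the raid with an on-disk superblock (-s)
    ./scripts/rpc.py bdev_raid_create -z 64 -r raid0 -b "pt1 pt2" -n raid_bdev1 -s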
00:08:36.234   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:08:36.234   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:08:36.234   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:36.234   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:36.234   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:36.234   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:36.234    11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:36.234    11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:36.234    11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.234    11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.234  [2024-12-16 11:30:02.267942] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:36.234    11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.493   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:36.493    "name": "raid_bdev1",
00:08:36.493    "aliases": [
00:08:36.493      "397b7d99-f37b-4892-831e-c9ecd3e89298"
00:08:36.493    ],
00:08:36.493    "product_name": "Raid Volume",
00:08:36.493    "block_size": 512,
00:08:36.493    "num_blocks": 126976,
00:08:36.493    "uuid": "397b7d99-f37b-4892-831e-c9ecd3e89298",
00:08:36.493    "assigned_rate_limits": {
00:08:36.493      "rw_ios_per_sec": 0,
00:08:36.493      "rw_mbytes_per_sec": 0,
00:08:36.493      "r_mbytes_per_sec": 0,
00:08:36.493      "w_mbytes_per_sec": 0
00:08:36.493    },
00:08:36.493    "claimed": false,
00:08:36.493    "zoned": false,
00:08:36.493    "supported_io_types": {
00:08:36.493      "read": true,
00:08:36.493      "write": true,
00:08:36.493      "unmap": true,
00:08:36.493      "flush": true,
00:08:36.493      "reset": true,
00:08:36.493      "nvme_admin": false,
00:08:36.493      "nvme_io": false,
00:08:36.493      "nvme_io_md": false,
00:08:36.493      "write_zeroes": true,
00:08:36.493      "zcopy": false,
00:08:36.493      "get_zone_info": false,
00:08:36.493      "zone_management": false,
00:08:36.493      "zone_append": false,
00:08:36.493      "compare": false,
00:08:36.493      "compare_and_write": false,
00:08:36.493      "abort": false,
00:08:36.493      "seek_hole": false,
00:08:36.493      "seek_data": false,
00:08:36.493      "copy": false,
00:08:36.493      "nvme_iov_md": false
00:08:36.493    },
00:08:36.493    "memory_domains": [
00:08:36.493      {
00:08:36.493        "dma_device_id": "system",
00:08:36.493        "dma_device_type": 1
00:08:36.493      },
00:08:36.493      {
00:08:36.493        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:36.493        "dma_device_type": 2
00:08:36.493      },
00:08:36.493      {
00:08:36.493        "dma_device_id": "system",
00:08:36.494        "dma_device_type": 1
00:08:36.494      },
00:08:36.494      {
00:08:36.494        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:36.494        "dma_device_type": 2
00:08:36.494      }
00:08:36.494    ],
00:08:36.494    "driver_specific": {
00:08:36.494      "raid": {
00:08:36.494        "uuid": "397b7d99-f37b-4892-831e-c9ecd3e89298",
00:08:36.494        "strip_size_kb": 64,
00:08:36.494        "state": "online",
00:08:36.494        "raid_level": "raid0",
00:08:36.494        "superblock": true,
00:08:36.494        "num_base_bdevs": 2,
00:08:36.494        "num_base_bdevs_discovered": 2,
00:08:36.494        "num_base_bdevs_operational": 2,
00:08:36.494        "base_bdevs_list": [
00:08:36.494          {
00:08:36.494            "name": "pt1",
00:08:36.494            "uuid": "00000000-0000-0000-0000-000000000001",
00:08:36.494            "is_configured": true,
00:08:36.494            "data_offset": 2048,
00:08:36.494            "data_size": 63488
00:08:36.494          },
00:08:36.494          {
00:08:36.494            "name": "pt2",
00:08:36.494            "uuid": "00000000-0000-0000-0000-000000000002",
00:08:36.494            "is_configured": true,
00:08:36.494            "data_offset": 2048,
00:08:36.494            "data_size": 63488
00:08:36.494          }
00:08:36.494        ]
00:08:36.494      }
00:08:36.494    }
00:08:36.494  }'
00:08:36.494    11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:36.494   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:08:36.494  pt2'
00:08:36.494    11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:36.494   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:08:36.494   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:36.494    11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:36.494    11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:08:36.494    11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.494    11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.494    11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.494   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:08:36.494   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:08:36.494   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:36.494    11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:08:36.494    11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.494    11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.494    11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:36.494    11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.494   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:08:36.494   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:08:36.494    11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:36.494    11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.494    11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.494    11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:08:36.494  [2024-12-16 11:30:02.511603] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:36.494    11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.494   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=397b7d99-f37b-4892-831e-c9ecd3e89298
00:08:36.494   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 397b7d99-f37b-4892-831e-c9ecd3e89298 ']'
00:08:36.494   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:36.494   11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.494   11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.494  [2024-12-16 11:30:02.543219] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:36.494  [2024-12-16 11:30:02.543256] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:36.494  [2024-12-16 11:30:02.543375] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:36.494  [2024-12-16 11:30:02.543436] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:36.494  [2024-12-16 11:30:02.543464] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:08:36.494   11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.494    11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:08:36.494    11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:36.494    11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.494    11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.753    11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.753   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:08:36.753   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:08:36.753   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:36.753   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:08:36.753   11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.753   11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.753   11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.753   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:36.753   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:08:36.753   11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.753   11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.753   11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.753    11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:08:36.753    11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:08:36.753    11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.753    11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.753    11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.753   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:08:36.753   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:08:36.753   11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:08:36.753   11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:08:36.753   11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:08:36.753   11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:36.753    11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:08:36.753   11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:36.753   11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:08:36.753   11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.753   11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.753  [2024-12-16 11:30:02.667059] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:08:36.753  [2024-12-16 11:30:02.669298] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:08:36.753  [2024-12-16 11:30:02.669386] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:08:36.753  [2024-12-16 11:30:02.669442] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:08:36.753  [2024-12-16 11:30:02.669462] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:36.753  [2024-12-16 11:30:02.669472] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring
00:08:36.753  request:
00:08:36.753  {
00:08:36.753  "name": "raid_bdev1",
00:08:36.753  "raid_level": "raid0",
00:08:36.753  "base_bdevs": [
00:08:36.753  "malloc1",
00:08:36.753  "malloc2"
00:08:36.753  ],
00:08:36.753  "strip_size_kb": 64,
00:08:36.753  "superblock": false,
00:08:36.753  "method": "bdev_raid_create",
00:08:36.753  "req_id": 1
00:08:36.753  }
00:08:36.753  Got JSON-RPC error response
00:08:36.753  response:
00:08:36.753  {
00:08:36.753  "code": -17,
00:08:36.753  "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:08:36.753  }
00:08:36.753   11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:08:36.753   11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:08:36.753   11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:08:36.753   11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:08:36.754   11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
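This negative case is expected to fail: raid_bdev1 and the passthru bdevs were deleted above, but malloc1 and malloc2 still carry the superblock that was written through pt1/pt2, so examine reports "Superblock of a different raid bdev found" on each of them and bdev_raid_create is rejected with -17 (File exists); the NOT wrapper treats the non-zero rc as the pass condition. A sketch of the failing call, assuming the rpc.py client:

    # expected to fail: the malloc bdevs still hold raid_bdev1's superblock
    ./scripts/rpc.py bdev_raid_create -z 64 -r raid0 -b "malloc1 malloc2" -n raid_bdev1 \
        && echo "unexpected success" || echo "failed as expected"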
00:08:36.754    11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:08:36.754    11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:36.754    11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.754    11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.754    11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.754   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:08:36.754   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:08:36.754   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:08:36.754   11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.754   11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.754  [2024-12-16 11:30:02.730895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:08:36.754  [2024-12-16 11:30:02.731015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:36.754  [2024-12-16 11:30:02.731067] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:08:36.754  [2024-12-16 11:30:02.731101] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:36.754  [2024-12-16 11:30:02.733632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:36.754  [2024-12-16 11:30:02.733715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:08:36.754  [2024-12-16 11:30:02.733836] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:08:36.754  [2024-12-16 11:30:02.733918] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:08:36.754  pt1
00:08:36.754   11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.754   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2
00:08:36.754   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:36.754   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:36.754   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:36.754   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:36.754   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:36.754   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:36.754   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:36.754   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:36.754   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:36.754    11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:36.754    11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:36.754    11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.754    11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:36.754    11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.754   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:36.754    "name": "raid_bdev1",
00:08:36.754    "uuid": "397b7d99-f37b-4892-831e-c9ecd3e89298",
00:08:36.754    "strip_size_kb": 64,
00:08:36.754    "state": "configuring",
00:08:36.754    "raid_level": "raid0",
00:08:36.754    "superblock": true,
00:08:36.754    "num_base_bdevs": 2,
00:08:36.754    "num_base_bdevs_discovered": 1,
00:08:36.754    "num_base_bdevs_operational": 2,
00:08:36.754    "base_bdevs_list": [
00:08:36.754      {
00:08:36.754        "name": "pt1",
00:08:36.754        "uuid": "00000000-0000-0000-0000-000000000001",
00:08:36.754        "is_configured": true,
00:08:36.754        "data_offset": 2048,
00:08:36.754        "data_size": 63488
00:08:36.754      },
00:08:36.754      {
00:08:36.754        "name": null,
00:08:36.754        "uuid": "00000000-0000-0000-0000-000000000002",
00:08:36.754        "is_configured": false,
00:08:36.754        "data_offset": 2048,
00:08:36.754        "data_size": 63488
00:08:36.754      }
00:08:36.754    ]
00:08:36.754  }'
00:08:36.754   11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:36.754   11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:37.322   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:08:37.322   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:08:37.322   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:08:37.322   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:08:37.322   11:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:37.322   11:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:37.322  [2024-12-16 11:30:03.178182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:08:37.322  [2024-12-16 11:30:03.178263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:37.322  [2024-12-16 11:30:03.178293] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:08:37.322  [2024-12-16 11:30:03.178304] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:37.322  [2024-12-16 11:30:03.178791] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:37.322  [2024-12-16 11:30:03.178812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:08:37.322  [2024-12-16 11:30:03.178897] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:08:37.322  [2024-12-16 11:30:03.178924] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:08:37.322  [2024-12-16 11:30:03.179025] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:08:37.322  [2024-12-16 11:30:03.179042] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:08:37.322  [2024-12-16 11:30:03.179325] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:08:37.322  [2024-12-16 11:30:03.179457] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:08:37.322  [2024-12-16 11:30:03.179475] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:08:37.322  [2024-12-16 11:30:03.179610] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:37.322  pt2
00:08:37.322   11:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:37.322   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:08:37.322   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:08:37.322   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:08:37.322   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:37.322   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:37.322   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:37.322   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:37.322   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:37.322   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:37.322   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:37.322   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:37.322   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:37.322    11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:37.322    11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:37.322    11:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:37.322    11:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:37.322    11:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:37.322   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:37.322    "name": "raid_bdev1",
00:08:37.322    "uuid": "397b7d99-f37b-4892-831e-c9ecd3e89298",
00:08:37.322    "strip_size_kb": 64,
00:08:37.322    "state": "online",
00:08:37.322    "raid_level": "raid0",
00:08:37.322    "superblock": true,
00:08:37.322    "num_base_bdevs": 2,
00:08:37.322    "num_base_bdevs_discovered": 2,
00:08:37.322    "num_base_bdevs_operational": 2,
00:08:37.322    "base_bdevs_list": [
00:08:37.322      {
00:08:37.322        "name": "pt1",
00:08:37.322        "uuid": "00000000-0000-0000-0000-000000000001",
00:08:37.322        "is_configured": true,
00:08:37.322        "data_offset": 2048,
00:08:37.322        "data_size": 63488
00:08:37.322      },
00:08:37.322      {
00:08:37.322        "name": "pt2",
00:08:37.322        "uuid": "00000000-0000-0000-0000-000000000002",
00:08:37.322        "is_configured": true,
00:08:37.322        "data_offset": 2048,
00:08:37.322        "data_size": 63488
00:08:37.322      }
00:08:37.322    ]
00:08:37.322  }'
00:08:37.322   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:37.322   11:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:37.580   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:08:37.580   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:08:37.580   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:37.580   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:37.580   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:37.581   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:37.581    11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:37.581    11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:37.581    11:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:37.581    11:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:37.581  [2024-12-16 11:30:03.597888] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:37.581    11:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:37.581   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:37.581    "name": "raid_bdev1",
00:08:37.581    "aliases": [
00:08:37.581      "397b7d99-f37b-4892-831e-c9ecd3e89298"
00:08:37.581    ],
00:08:37.581    "product_name": "Raid Volume",
00:08:37.581    "block_size": 512,
00:08:37.581    "num_blocks": 126976,
00:08:37.581    "uuid": "397b7d99-f37b-4892-831e-c9ecd3e89298",
00:08:37.581    "assigned_rate_limits": {
00:08:37.581      "rw_ios_per_sec": 0,
00:08:37.581      "rw_mbytes_per_sec": 0,
00:08:37.581      "r_mbytes_per_sec": 0,
00:08:37.581      "w_mbytes_per_sec": 0
00:08:37.581    },
00:08:37.581    "claimed": false,
00:08:37.581    "zoned": false,
00:08:37.581    "supported_io_types": {
00:08:37.581      "read": true,
00:08:37.581      "write": true,
00:08:37.581      "unmap": true,
00:08:37.581      "flush": true,
00:08:37.581      "reset": true,
00:08:37.581      "nvme_admin": false,
00:08:37.581      "nvme_io": false,
00:08:37.581      "nvme_io_md": false,
00:08:37.581      "write_zeroes": true,
00:08:37.581      "zcopy": false,
00:08:37.581      "get_zone_info": false,
00:08:37.581      "zone_management": false,
00:08:37.581      "zone_append": false,
00:08:37.581      "compare": false,
00:08:37.581      "compare_and_write": false,
00:08:37.581      "abort": false,
00:08:37.581      "seek_hole": false,
00:08:37.581      "seek_data": false,
00:08:37.581      "copy": false,
00:08:37.581      "nvme_iov_md": false
00:08:37.581    },
00:08:37.581    "memory_domains": [
00:08:37.581      {
00:08:37.581        "dma_device_id": "system",
00:08:37.581        "dma_device_type": 1
00:08:37.581      },
00:08:37.581      {
00:08:37.581        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:37.581        "dma_device_type": 2
00:08:37.581      },
00:08:37.581      {
00:08:37.581        "dma_device_id": "system",
00:08:37.581        "dma_device_type": 1
00:08:37.581      },
00:08:37.581      {
00:08:37.581        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:37.581        "dma_device_type": 2
00:08:37.581      }
00:08:37.581    ],
00:08:37.581    "driver_specific": {
00:08:37.581      "raid": {
00:08:37.581        "uuid": "397b7d99-f37b-4892-831e-c9ecd3e89298",
00:08:37.581        "strip_size_kb": 64,
00:08:37.581        "state": "online",
00:08:37.581        "raid_level": "raid0",
00:08:37.581        "superblock": true,
00:08:37.581        "num_base_bdevs": 2,
00:08:37.581        "num_base_bdevs_discovered": 2,
00:08:37.581        "num_base_bdevs_operational": 2,
00:08:37.581        "base_bdevs_list": [
00:08:37.581          {
00:08:37.581            "name": "pt1",
00:08:37.581            "uuid": "00000000-0000-0000-0000-000000000001",
00:08:37.581            "is_configured": true,
00:08:37.581            "data_offset": 2048,
00:08:37.581            "data_size": 63488
00:08:37.581          },
00:08:37.581          {
00:08:37.581            "name": "pt2",
00:08:37.581            "uuid": "00000000-0000-0000-0000-000000000002",
00:08:37.581            "is_configured": true,
00:08:37.581            "data_offset": 2048,
00:08:37.581            "data_size": 63488
00:08:37.581          }
00:08:37.581        ]
00:08:37.581      }
00:08:37.581    }
00:08:37.581  }'
00:08:37.581    11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:37.892   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:08:37.892  pt2'
00:08:37.892    11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:37.892   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:08:37.892   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:37.892    11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:37.892    11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:08:37.892    11:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:37.892    11:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:37.892    11:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:37.892   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:08:37.892   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:08:37.892   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:37.892    11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:08:37.892    11:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:37.892    11:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:37.892    11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:37.892    11:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:37.892   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:08:37.892   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:08:37.892    11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:37.892    11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:08:37.892    11:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:37.892    11:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:37.892  [2024-12-16 11:30:03.805455] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:37.892    11:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:37.892   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 397b7d99-f37b-4892-831e-c9ecd3e89298 '!=' 397b7d99-f37b-4892-831e-c9ecd3e89298 ']'
00:08:37.892   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0
00:08:37.892   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:37.892   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:08:37.892   11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72831
00:08:37.892   11:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72831 ']'
00:08:37.892   11:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 72831
00:08:37.892    11:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname
00:08:37.892   11:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:37.892    11:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72831
00:08:37.892  killing process with pid 72831
00:08:37.892   11:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:37.892   11:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:37.892   11:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72831'
00:08:37.892   11:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 72831
00:08:37.892  [2024-12-16 11:30:03.891741] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:37.892  [2024-12-16 11:30:03.891843] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:37.892  [2024-12-16 11:30:03.891901] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:37.892  [2024-12-16 11:30:03.891913] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:08:37.892   11:30:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 72831
00:08:37.892  [2024-12-16 11:30:03.916877] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:38.150   11:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:08:38.150  
00:08:38.150  real	0m3.507s
00:08:38.150  user	0m5.459s
00:08:38.150  sys	0m0.693s
00:08:38.150   11:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:38.150  ************************************
00:08:38.150  END TEST raid_superblock_test
00:08:38.150  ************************************
00:08:38.150   11:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:38.408   11:30:04 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read
00:08:38.408   11:30:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:08:38.408   11:30:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:38.408   11:30:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:38.408  ************************************
00:08:38.408  START TEST raid_read_error_test
00:08:38.408  ************************************
00:08:38.408   11:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read
00:08:38.408   11:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0
00:08:38.408   11:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2
00:08:38.408   11:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:08:38.408    11:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:08:38.408    11:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:38.408    11:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:08:38.408    11:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:08:38.408    11:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:38.408    11:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:08:38.408    11:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:08:38.408    11:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:38.408   11:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:08:38.408   11:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:08:38.408   11:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:08:38.408   11:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:08:38.408   11:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:08:38.409   11:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:08:38.409   11:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:08:38.409   11:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']'
00:08:38.409   11:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:08:38.409   11:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:08:38.409    11:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:08:38.409   11:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qX7nfRElsx
00:08:38.409   11:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73026
00:08:38.409   11:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:08:38.409   11:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73026
00:08:38.409   11:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 73026 ']'
00:08:38.409   11:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:38.409  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:38.409   11:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:38.409   11:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:38.409   11:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:38.409   11:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:38.409  [2024-12-16 11:30:04.341078] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:08:38.409  [2024-12-16 11:30:04.341307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73026 ]
00:08:38.667  [2024-12-16 11:30:04.507420] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:38.667  [2024-12-16 11:30:04.561108] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:38.667  [2024-12-16 11:30:04.607113] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:38.667  [2024-12-16 11:30:04.607155] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:39.245   11:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:39.245   11:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0
00:08:39.245   11:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:08:39.245   11:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:08:39.245   11:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:39.245   11:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:39.245  BaseBdev1_malloc
00:08:39.245   11:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:39.245   11:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:08:39.245   11:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:39.245   11:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:39.245  true
00:08:39.245   11:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:39.245   11:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:08:39.245   11:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:39.245   11:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:39.245  [2024-12-16 11:30:05.291447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:08:39.245  [2024-12-16 11:30:05.291520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:39.245  [2024-12-16 11:30:05.291571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:08:39.245  [2024-12-16 11:30:05.291583] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:39.245  [2024-12-16 11:30:05.294077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:39.245  [2024-12-16 11:30:05.294174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:08:39.245  BaseBdev1
00:08:39.245   11:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:39.245   11:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:08:39.245   11:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:08:39.245   11:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:39.245   11:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:39.506  BaseBdev2_malloc
00:08:39.506   11:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:39.506   11:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:08:39.506   11:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:39.506   11:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:39.506  true
00:08:39.506   11:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:39.506   11:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:08:39.506   11:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:39.506   11:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:39.506  [2024-12-16 11:30:05.342889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:08:39.506  [2024-12-16 11:30:05.342953] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:39.506  [2024-12-16 11:30:05.342977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:08:39.506  [2024-12-16 11:30:05.342987] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:39.506  [2024-12-16 11:30:05.345479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:39.506  [2024-12-16 11:30:05.345522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:08:39.506  BaseBdev2
00:08:39.506   11:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:39.506   11:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:08:39.506   11:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:39.506   11:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:39.506  [2024-12-16 11:30:05.354919] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:39.506  [2024-12-16 11:30:05.357158] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:39.506  [2024-12-16 11:30:05.357358] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:08:39.506  [2024-12-16 11:30:05.357374] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:08:39.507  [2024-12-16 11:30:05.357688] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:08:39.507  [2024-12-16 11:30:05.357835] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:08:39.507  [2024-12-16 11:30:05.357849] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:08:39.507  [2024-12-16 11:30:05.358009] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:39.507   11:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:39.507   11:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:08:39.507   11:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:39.507   11:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:39.507   11:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:39.507   11:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:39.507   11:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:39.507   11:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:39.507   11:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:39.507   11:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:39.507   11:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:39.507    11:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:39.507    11:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:39.507    11:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:39.507    11:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:39.507    11:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:39.507   11:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:39.507    "name": "raid_bdev1",
00:08:39.507    "uuid": "b484c683-a0e7-4825-b982-c03a295d95db",
00:08:39.507    "strip_size_kb": 64,
00:08:39.507    "state": "online",
00:08:39.507    "raid_level": "raid0",
00:08:39.507    "superblock": true,
00:08:39.507    "num_base_bdevs": 2,
00:08:39.507    "num_base_bdevs_discovered": 2,
00:08:39.507    "num_base_bdevs_operational": 2,
00:08:39.507    "base_bdevs_list": [
00:08:39.507      {
00:08:39.507        "name": "BaseBdev1",
00:08:39.507        "uuid": "71204963-1351-5eb6-8d1a-052cfd484d9c",
00:08:39.507        "is_configured": true,
00:08:39.507        "data_offset": 2048,
00:08:39.507        "data_size": 63488
00:08:39.507      },
00:08:39.507      {
00:08:39.507        "name": "BaseBdev2",
00:08:39.507        "uuid": "a784a17c-469a-5ab5-b617-e604ea0122e8",
00:08:39.507        "is_configured": true,
00:08:39.507        "data_offset": 2048,
00:08:39.507        "data_size": 63488
00:08:39.507      }
00:08:39.507    ]
00:08:39.507  }'
00:08:39.507   11:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:39.507   11:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:39.766   11:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:08:39.766   11:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:08:40.024  [2024-12-16 11:30:05.910502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:08:40.961   11:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:08:40.961   11:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:40.961   11:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:40.961   11:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:40.961   11:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:08:40.961   11:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:08:40.961   11:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:08:40.961   11:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:08:40.961   11:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:40.961   11:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:40.961   11:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:40.961   11:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:40.961   11:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:40.961   11:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:40.961   11:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:40.961   11:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:40.961   11:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:40.961    11:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:40.961    11:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:40.961    11:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:40.961    11:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:40.961    11:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:40.961   11:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:40.961    "name": "raid_bdev1",
00:08:40.961    "uuid": "b484c683-a0e7-4825-b982-c03a295d95db",
00:08:40.961    "strip_size_kb": 64,
00:08:40.961    "state": "online",
00:08:40.961    "raid_level": "raid0",
00:08:40.961    "superblock": true,
00:08:40.961    "num_base_bdevs": 2,
00:08:40.961    "num_base_bdevs_discovered": 2,
00:08:40.961    "num_base_bdevs_operational": 2,
00:08:40.961    "base_bdevs_list": [
00:08:40.961      {
00:08:40.961        "name": "BaseBdev1",
00:08:40.961        "uuid": "71204963-1351-5eb6-8d1a-052cfd484d9c",
00:08:40.961        "is_configured": true,
00:08:40.961        "data_offset": 2048,
00:08:40.961        "data_size": 63488
00:08:40.961      },
00:08:40.961      {
00:08:40.961        "name": "BaseBdev2",
00:08:40.961        "uuid": "a784a17c-469a-5ab5-b617-e604ea0122e8",
00:08:40.961        "is_configured": true,
00:08:40.961        "data_offset": 2048,
00:08:40.961        "data_size": 63488
00:08:40.961      }
00:08:40.961    ]
00:08:40.961  }'
00:08:40.961   11:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:40.961   11:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:41.220   11:30:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:41.220   11:30:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:41.220   11:30:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:41.220  [2024-12-16 11:30:07.272070] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:41.220  [2024-12-16 11:30:07.272165] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:41.220  [2024-12-16 11:30:07.275343] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:41.220  [2024-12-16 11:30:07.275447] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:41.220  [2024-12-16 11:30:07.275511] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:41.220  [2024-12-16 11:30:07.275602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:08:41.220   11:30:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:41.220   11:30:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73026
00:08:41.220   11:30:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 73026 ']'
00:08:41.220  {
00:08:41.220    "results": [
00:08:41.220      {
00:08:41.220        "job": "raid_bdev1",
00:08:41.220        "core_mask": "0x1",
00:08:41.220        "workload": "randrw",
00:08:41.220        "percentage": 50,
00:08:41.220        "status": "finished",
00:08:41.220        "queue_depth": 1,
00:08:41.220        "io_size": 131072,
00:08:41.220        "runtime": 1.362076,
00:08:41.220        "iops": 13791.447760624224,
00:08:41.220        "mibps": 1723.930970078028,
00:08:41.220        "io_failed": 1,
00:08:41.220        "io_timeout": 0,
00:08:41.220        "avg_latency_us": 100.09901975688483,
00:08:41.220        "min_latency_us": 30.406986899563318,
00:08:41.220        "max_latency_us": 1781.4917030567685
00:08:41.220      }
00:08:41.220    ],
00:08:41.220    "core_count": 1
00:08:41.220  }
00:08:41.220   11:30:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 73026
00:08:41.220    11:30:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname
00:08:41.479   11:30:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:41.479    11:30:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73026
00:08:41.479   11:30:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:41.479   11:30:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:41.479   11:30:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73026'
00:08:41.479  killing process with pid 73026
00:08:41.479   11:30:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 73026
00:08:41.479  [2024-12-16 11:30:07.324475] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:41.479   11:30:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 73026
00:08:41.479  [2024-12-16 11:30:07.341108] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:41.738    11:30:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qX7nfRElsx
00:08:41.738    11:30:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:08:41.738    11:30:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:08:41.738   11:30:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73
00:08:41.738   11:30:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0
00:08:41.738   11:30:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:41.738   11:30:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:08:41.738   11:30:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]]
00:08:41.738  
00:08:41.738  real	0m3.370s
00:08:41.738  user	0m4.309s
00:08:41.738  sys	0m0.573s
00:08:41.738   11:30:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:41.738  ************************************
00:08:41.738  END TEST raid_read_error_test
00:08:41.738  ************************************
00:08:41.738   11:30:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:41.738   11:30:07 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write
00:08:41.738   11:30:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:08:41.738   11:30:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:41.738   11:30:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:41.738  ************************************
00:08:41.738  START TEST raid_write_error_test
00:08:41.738  ************************************
00:08:41.738   11:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write
00:08:41.738   11:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0
00:08:41.738   11:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2
00:08:41.738   11:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:08:41.738    11:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:08:41.738    11:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:41.738    11:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:08:41.738    11:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:08:41.738    11:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:41.738    11:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:08:41.738    11:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:08:41.738    11:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:41.738   11:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:08:41.738   11:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:08:41.738   11:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:08:41.738   11:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:08:41.738   11:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:08:41.738   11:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:08:41.738   11:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:08:41.738   11:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']'
00:08:41.738   11:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:08:41.738   11:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:08:41.738    11:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:08:41.738   11:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.XPzfVBLbos
00:08:41.738   11:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73155
00:08:41.738   11:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:08:41.738   11:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73155
00:08:41.738   11:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 73155 ']'
00:08:41.738   11:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:41.738   11:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:41.738   11:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:41.738  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:41.738   11:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:41.738   11:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:41.738  [2024-12-16 11:30:07.785205] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:08:41.738  [2024-12-16 11:30:07.785421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73155 ]
00:08:41.997  [2024-12-16 11:30:07.950701] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:41.997  [2024-12-16 11:30:08.002409] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:41.997  [2024-12-16 11:30:08.047906] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:41.997  [2024-12-16 11:30:08.048033] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:42.930   11:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:42.930   11:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0
00:08:42.930   11:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:08:42.930   11:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:08:42.930   11:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:42.930   11:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.930  BaseBdev1_malloc
00:08:42.930   11:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:42.930   11:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:08:42.930   11:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:42.930   11:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.930  true
00:08:42.930   11:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:42.930   11:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:08:42.930   11:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:42.930   11:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.930  [2024-12-16 11:30:08.723994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:08:42.930  [2024-12-16 11:30:08.724109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:42.930  [2024-12-16 11:30:08.724139] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:08:42.930  [2024-12-16 11:30:08.724154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:42.930  [2024-12-16 11:30:08.726677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:42.930  [2024-12-16 11:30:08.726719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:08:42.930  BaseBdev1
00:08:42.930   11:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:42.930   11:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:08:42.930   11:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:08:42.930   11:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:42.930   11:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.930  BaseBdev2_malloc
00:08:42.930   11:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:42.930   11:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:08:42.930   11:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:42.930   11:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.930  true
00:08:42.930   11:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:42.930   11:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:08:42.930   11:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:42.930   11:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.930  [2024-12-16 11:30:08.775916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:08:42.930  [2024-12-16 11:30:08.776026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:42.930  [2024-12-16 11:30:08.776070] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:08:42.930  [2024-12-16 11:30:08.776120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:42.930  [2024-12-16 11:30:08.778593] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:42.931  [2024-12-16 11:30:08.778674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:08:42.931  BaseBdev2
00:08:42.931   11:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:42.931   11:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:08:42.931   11:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:42.931   11:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.931  [2024-12-16 11:30:08.787931] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:42.931  [2024-12-16 11:30:08.790132] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:42.931  [2024-12-16 11:30:08.790384] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:08:42.931  [2024-12-16 11:30:08.790440] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:08:42.931  [2024-12-16 11:30:08.790804] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:08:42.931  [2024-12-16 11:30:08.791008] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:08:42.931  [2024-12-16 11:30:08.791062] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:08:42.931  [2024-12-16 11:30:08.791271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:42.931   11:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:42.931   11:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:08:42.931   11:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:42.931   11:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:42.931   11:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:42.931   11:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:42.931   11:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:42.931   11:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:42.931   11:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:42.931   11:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:42.931   11:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:42.931    11:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:42.931    11:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:42.931    11:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:42.931    11:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.931    11:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:42.931   11:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:42.931    "name": "raid_bdev1",
00:08:42.931    "uuid": "178adb8e-63a7-44cf-a744-5cbeb634a46f",
00:08:42.931    "strip_size_kb": 64,
00:08:42.931    "state": "online",
00:08:42.931    "raid_level": "raid0",
00:08:42.931    "superblock": true,
00:08:42.931    "num_base_bdevs": 2,
00:08:42.931    "num_base_bdevs_discovered": 2,
00:08:42.931    "num_base_bdevs_operational": 2,
00:08:42.931    "base_bdevs_list": [
00:08:42.931      {
00:08:42.931        "name": "BaseBdev1",
00:08:42.931        "uuid": "e4c92099-8c14-5edc-9bbc-09d1707d4444",
00:08:42.931        "is_configured": true,
00:08:42.931        "data_offset": 2048,
00:08:42.931        "data_size": 63488
00:08:42.931      },
00:08:42.931      {
00:08:42.931        "name": "BaseBdev2",
00:08:42.931        "uuid": "ecf98fff-e0e0-56a5-b112-85a42856a18d",
00:08:42.931        "is_configured": true,
00:08:42.931        "data_offset": 2048,
00:08:42.931        "data_size": 63488
00:08:42.931      }
00:08:42.931    ]
00:08:42.931  }'
00:08:42.931   11:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:42.931   11:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:43.499   11:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:08:43.499   11:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:08:43.499  [2024-12-16 11:30:09.367461] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:08:44.436   11:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:08:44.436   11:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:44.436   11:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:44.436   11:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:44.436   11:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:08:44.436   11:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:08:44.436   11:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:08:44.436   11:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:08:44.436   11:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:44.436   11:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:44.436   11:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:44.436   11:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:44.436   11:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:44.436   11:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:44.436   11:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:44.436   11:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:44.436   11:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:44.436    11:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:44.436    11:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:44.436    11:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:44.437    11:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:44.437    11:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:44.437   11:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:44.437    "name": "raid_bdev1",
00:08:44.437    "uuid": "178adb8e-63a7-44cf-a744-5cbeb634a46f",
00:08:44.437    "strip_size_kb": 64,
00:08:44.437    "state": "online",
00:08:44.437    "raid_level": "raid0",
00:08:44.437    "superblock": true,
00:08:44.437    "num_base_bdevs": 2,
00:08:44.437    "num_base_bdevs_discovered": 2,
00:08:44.437    "num_base_bdevs_operational": 2,
00:08:44.437    "base_bdevs_list": [
00:08:44.437      {
00:08:44.437        "name": "BaseBdev1",
00:08:44.437        "uuid": "e4c92099-8c14-5edc-9bbc-09d1707d4444",
00:08:44.437        "is_configured": true,
00:08:44.437        "data_offset": 2048,
00:08:44.437        "data_size": 63488
00:08:44.437      },
00:08:44.437      {
00:08:44.437        "name": "BaseBdev2",
00:08:44.437        "uuid": "ecf98fff-e0e0-56a5-b112-85a42856a18d",
00:08:44.437        "is_configured": true,
00:08:44.437        "data_offset": 2048,
00:08:44.437        "data_size": 63488
00:08:44.437      }
00:08:44.437    ]
00:08:44.437  }'
00:08:44.437   11:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:44.437   11:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:44.696   11:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:44.696   11:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:44.696   11:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:44.696  [2024-12-16 11:30:10.753030] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:44.696  [2024-12-16 11:30:10.753134] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:44.696  [2024-12-16 11:30:10.756235] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:44.696  [2024-12-16 11:30:10.756330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:44.696  [2024-12-16 11:30:10.756395] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:44.696  [2024-12-16 11:30:10.756447] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:08:44.696  {
00:08:44.696    "results": [
00:08:44.696      {
00:08:44.696        "job": "raid_bdev1",
00:08:44.696        "core_mask": "0x1",
00:08:44.696        "workload": "randrw",
00:08:44.696        "percentage": 50,
00:08:44.696        "status": "finished",
00:08:44.696        "queue_depth": 1,
00:08:44.696        "io_size": 131072,
00:08:44.696        "runtime": 1.386279,
00:08:44.696        "iops": 13790.874708482203,
00:08:44.696        "mibps": 1723.8593385602753,
00:08:44.696        "io_failed": 1,
00:08:44.696        "io_timeout": 0,
00:08:44.696        "avg_latency_us": 100.07671392069572,
00:08:44.696        "min_latency_us": 30.183406113537117,
00:08:44.696        "max_latency_us": 1731.4096069868995
00:08:44.696      }
00:08:44.696    ],
00:08:44.696    "core_count": 1
00:08:44.696  }
00:08:44.696   11:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:44.696   11:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73155
00:08:44.696   11:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 73155 ']'
00:08:44.696   11:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 73155
00:08:44.955    11:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname
00:08:44.955   11:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:44.955    11:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73155
00:08:44.955  killing process with pid 73155
00:08:44.956   11:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:44.956   11:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:44.956   11:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73155'
00:08:44.956   11:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 73155
00:08:44.956  [2024-12-16 11:30:10.802648] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:44.956   11:30:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 73155
00:08:44.956  [2024-12-16 11:30:10.819095] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:45.215    11:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:08:45.215    11:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.XPzfVBLbos
00:08:45.215    11:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:08:45.215   11:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72
00:08:45.215   11:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0
00:08:45.215   11:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:45.215   11:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:08:45.215   11:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]]
00:08:45.215  
00:08:45.215  real	0m3.404s
00:08:45.215  user	0m4.371s
00:08:45.215  sys	0m0.572s
00:08:45.215   11:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:45.215  ************************************
00:08:45.215  END TEST raid_write_error_test
00:08:45.215  ************************************
00:08:45.215   11:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.215   11:30:11 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:08:45.215   11:30:11 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false
00:08:45.215   11:30:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:08:45.215   11:30:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:45.215   11:30:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:45.215  ************************************
00:08:45.215  START TEST raid_state_function_test
00:08:45.215  ************************************
00:08:45.215   11:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false
00:08:45.215   11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:08:45.215   11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:08:45.215   11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:08:45.215   11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:08:45.215    11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:08:45.215    11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:45.215    11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:08:45.215    11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:45.215    11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:45.215    11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:08:45.215    11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:45.215    11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:45.215   11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:08:45.215   11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:08:45.215   11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:08:45.215   11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:08:45.215   11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:08:45.215   11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:08:45.215   11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:08:45.215   11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:08:45.215   11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:08:45.215   11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:08:45.215   11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:08:45.215   11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73293
00:08:45.215   11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:45.215  Process raid pid: 73293
00:08:45.215   11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73293'
00:08:45.215  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:45.215   11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73293
00:08:45.215   11:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73293 ']'
00:08:45.215   11:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:45.215   11:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:45.215   11:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:45.215   11:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:45.215   11:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.215  [2024-12-16 11:30:11.247728] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:08:45.215  [2024-12-16 11:30:11.247977] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:45.474  [2024-12-16 11:30:11.413306] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:45.474  [2024-12-16 11:30:11.465043] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:45.474  [2024-12-16 11:30:11.510969] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:45.474  [2024-12-16 11:30:11.511100] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:46.411   11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:46.411   11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:08:46.411   11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:46.411   11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:46.411   11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.411  [2024-12-16 11:30:12.173825] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:46.411  [2024-12-16 11:30:12.173950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:46.411  [2024-12-16 11:30:12.174011] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:46.411  [2024-12-16 11:30:12.174051] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:46.411   11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:46.411   11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:08:46.411   11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:46.411   11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:46.411   11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:46.411   11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:46.411   11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:46.411   11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:46.411   11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:46.411   11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:46.411   11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:46.411    11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:46.411    11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:46.411    11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.411    11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:46.411    11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:46.411   11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:46.411    "name": "Existed_Raid",
00:08:46.411    "uuid": "00000000-0000-0000-0000-000000000000",
00:08:46.411    "strip_size_kb": 64,
00:08:46.411    "state": "configuring",
00:08:46.411    "raid_level": "concat",
00:08:46.411    "superblock": false,
00:08:46.411    "num_base_bdevs": 2,
00:08:46.411    "num_base_bdevs_discovered": 0,
00:08:46.411    "num_base_bdevs_operational": 2,
00:08:46.411    "base_bdevs_list": [
00:08:46.411      {
00:08:46.411        "name": "BaseBdev1",
00:08:46.411        "uuid": "00000000-0000-0000-0000-000000000000",
00:08:46.411        "is_configured": false,
00:08:46.411        "data_offset": 0,
00:08:46.411        "data_size": 0
00:08:46.411      },
00:08:46.411      {
00:08:46.411        "name": "BaseBdev2",
00:08:46.411        "uuid": "00000000-0000-0000-0000-000000000000",
00:08:46.411        "is_configured": false,
00:08:46.411        "data_offset": 0,
00:08:46.411        "data_size": 0
00:08:46.411      }
00:08:46.411    ]
00:08:46.412  }'
00:08:46.412   11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:46.412   11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.671   11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:46.671   11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:46.671   11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.671  [2024-12-16 11:30:12.648927] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:46.671  [2024-12-16 11:30:12.649041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:08:46.671   11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:46.671   11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:46.671   11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:46.671   11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.671  [2024-12-16 11:30:12.660952] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:46.671  [2024-12-16 11:30:12.661002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:46.671  [2024-12-16 11:30:12.661013] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:46.671  [2024-12-16 11:30:12.661024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:46.671   11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:46.671   11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:46.671   11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:46.671   11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.671  [2024-12-16 11:30:12.682535] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:46.671  BaseBdev1
00:08:46.671   11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:46.671   11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:08:46.671   11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:08:46.671   11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:46.671   11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:08:46.671   11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:46.671   11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:46.671   11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:46.671   11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:46.671   11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.671   11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:46.671   11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:46.671   11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:46.671   11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.671  [
00:08:46.671  {
00:08:46.671  "name": "BaseBdev1",
00:08:46.671  "aliases": [
00:08:46.671  "061d1442-e7ef-40b6-99c7-e1abd5e982b9"
00:08:46.671  ],
00:08:46.671  "product_name": "Malloc disk",
00:08:46.671  "block_size": 512,
00:08:46.671  "num_blocks": 65536,
00:08:46.671  "uuid": "061d1442-e7ef-40b6-99c7-e1abd5e982b9",
00:08:46.671  "assigned_rate_limits": {
00:08:46.671  "rw_ios_per_sec": 0,
00:08:46.671  "rw_mbytes_per_sec": 0,
00:08:46.671  "r_mbytes_per_sec": 0,
00:08:46.671  "w_mbytes_per_sec": 0
00:08:46.672  },
00:08:46.672  "claimed": true,
00:08:46.672  "claim_type": "exclusive_write",
00:08:46.672  "zoned": false,
00:08:46.672  "supported_io_types": {
00:08:46.672  "read": true,
00:08:46.672  "write": true,
00:08:46.672  "unmap": true,
00:08:46.672  "flush": true,
00:08:46.672  "reset": true,
00:08:46.672  "nvme_admin": false,
00:08:46.672  "nvme_io": false,
00:08:46.672  "nvme_io_md": false,
00:08:46.672  "write_zeroes": true,
00:08:46.672  "zcopy": true,
00:08:46.672  "get_zone_info": false,
00:08:46.672  "zone_management": false,
00:08:46.672  "zone_append": false,
00:08:46.672  "compare": false,
00:08:46.672  "compare_and_write": false,
00:08:46.672  "abort": true,
00:08:46.672  "seek_hole": false,
00:08:46.672  "seek_data": false,
00:08:46.672  "copy": true,
00:08:46.672  "nvme_iov_md": false
00:08:46.672  },
00:08:46.672  "memory_domains": [
00:08:46.672  {
00:08:46.672  "dma_device_id": "system",
00:08:46.672  "dma_device_type": 1
00:08:46.672  },
00:08:46.672  {
00:08:46.672  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:46.672  "dma_device_type": 2
00:08:46.672  }
00:08:46.672  ],
00:08:46.672  "driver_specific": {}
00:08:46.672  }
00:08:46.672  ]
00:08:46.672   11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:46.672   11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:08:46.672   11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:08:46.672   11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:46.672   11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:46.672   11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:46.672   11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:46.672   11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:46.672   11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:46.672   11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:46.672   11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:46.672   11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:46.672    11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:46.672    11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:46.672    11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:46.672    11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:46.930    11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:46.930   11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:46.930    "name": "Existed_Raid",
00:08:46.930    "uuid": "00000000-0000-0000-0000-000000000000",
00:08:46.930    "strip_size_kb": 64,
00:08:46.930    "state": "configuring",
00:08:46.930    "raid_level": "concat",
00:08:46.930    "superblock": false,
00:08:46.930    "num_base_bdevs": 2,
00:08:46.930    "num_base_bdevs_discovered": 1,
00:08:46.930    "num_base_bdevs_operational": 2,
00:08:46.930    "base_bdevs_list": [
00:08:46.930      {
00:08:46.930        "name": "BaseBdev1",
00:08:46.930        "uuid": "061d1442-e7ef-40b6-99c7-e1abd5e982b9",
00:08:46.930        "is_configured": true,
00:08:46.930        "data_offset": 0,
00:08:46.930        "data_size": 65536
00:08:46.930      },
00:08:46.930      {
00:08:46.930        "name": "BaseBdev2",
00:08:46.930        "uuid": "00000000-0000-0000-0000-000000000000",
00:08:46.930        "is_configured": false,
00:08:46.930        "data_offset": 0,
00:08:46.930        "data_size": 0
00:08:46.930      }
00:08:46.930    ]
00:08:46.930  }'
00:08:46.930   11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:46.930   11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:47.189   11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:47.189   11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:47.189   11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:47.189  [2024-12-16 11:30:13.185731] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:47.189  [2024-12-16 11:30:13.185850] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:08:47.189   11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:47.189   11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:47.189   11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:47.189   11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:47.189  [2024-12-16 11:30:13.197737] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:47.189  [2024-12-16 11:30:13.199971] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:47.189  [2024-12-16 11:30:13.200019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:47.189   11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:47.189   11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:08:47.189   11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:47.189   11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:08:47.189   11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:47.189   11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:47.189   11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:47.189   11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:47.189   11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:47.189   11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:47.189   11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:47.189   11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:47.189   11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:47.189    11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:47.189    11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:47.189    11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:47.189    11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:47.189    11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:47.448   11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:47.448    "name": "Existed_Raid",
00:08:47.448    "uuid": "00000000-0000-0000-0000-000000000000",
00:08:47.448    "strip_size_kb": 64,
00:08:47.448    "state": "configuring",
00:08:47.448    "raid_level": "concat",
00:08:47.448    "superblock": false,
00:08:47.448    "num_base_bdevs": 2,
00:08:47.448    "num_base_bdevs_discovered": 1,
00:08:47.448    "num_base_bdevs_operational": 2,
00:08:47.448    "base_bdevs_list": [
00:08:47.448      {
00:08:47.448        "name": "BaseBdev1",
00:08:47.448        "uuid": "061d1442-e7ef-40b6-99c7-e1abd5e982b9",
00:08:47.448        "is_configured": true,
00:08:47.448        "data_offset": 0,
00:08:47.448        "data_size": 65536
00:08:47.448      },
00:08:47.448      {
00:08:47.448        "name": "BaseBdev2",
00:08:47.448        "uuid": "00000000-0000-0000-0000-000000000000",
00:08:47.448        "is_configured": false,
00:08:47.448        "data_offset": 0,
00:08:47.448        "data_size": 0
00:08:47.448      }
00:08:47.448    ]
00:08:47.448  }'
00:08:47.448   11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:47.448   11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:47.708   11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:47.708   11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:47.708   11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:47.708  [2024-12-16 11:30:13.689058] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:47.708  [2024-12-16 11:30:13.689216] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:08:47.708  [2024-12-16 11:30:13.689256] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:08:47.708  [2024-12-16 11:30:13.689707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:08:47.708  [2024-12-16 11:30:13.689959] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:08:47.708  [2024-12-16 11:30:13.690044] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:08:47.708  [2024-12-16 11:30:13.690370] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:47.708  BaseBdev2
00:08:47.708   11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:47.708   11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:08:47.708   11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:08:47.708   11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:47.708   11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:08:47.708   11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:47.708   11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:47.708   11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:47.708   11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:47.708   11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:47.708   11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:47.708   11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:47.708   11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:47.708   11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:47.708  [
00:08:47.708  {
00:08:47.708  "name": "BaseBdev2",
00:08:47.708  "aliases": [
00:08:47.708  "07b06695-be81-4598-abae-f396ae72a1e2"
00:08:47.708  ],
00:08:47.708  "product_name": "Malloc disk",
00:08:47.708  "block_size": 512,
00:08:47.708  "num_blocks": 65536,
00:08:47.708  "uuid": "07b06695-be81-4598-abae-f396ae72a1e2",
00:08:47.708  "assigned_rate_limits": {
00:08:47.708  "rw_ios_per_sec": 0,
00:08:47.708  "rw_mbytes_per_sec": 0,
00:08:47.708  "r_mbytes_per_sec": 0,
00:08:47.708  "w_mbytes_per_sec": 0
00:08:47.708  },
00:08:47.708  "claimed": true,
00:08:47.708  "claim_type": "exclusive_write",
00:08:47.708  "zoned": false,
00:08:47.708  "supported_io_types": {
00:08:47.708  "read": true,
00:08:47.708  "write": true,
00:08:47.708  "unmap": true,
00:08:47.708  "flush": true,
00:08:47.708  "reset": true,
00:08:47.708  "nvme_admin": false,
00:08:47.708  "nvme_io": false,
00:08:47.708  "nvme_io_md": false,
00:08:47.708  "write_zeroes": true,
00:08:47.708  "zcopy": true,
00:08:47.708  "get_zone_info": false,
00:08:47.708  "zone_management": false,
00:08:47.708  "zone_append": false,
00:08:47.708  "compare": false,
00:08:47.708  "compare_and_write": false,
00:08:47.708  "abort": true,
00:08:47.708  "seek_hole": false,
00:08:47.708  "seek_data": false,
00:08:47.708  "copy": true,
00:08:47.708  "nvme_iov_md": false
00:08:47.708  },
00:08:47.708  "memory_domains": [
00:08:47.708  {
00:08:47.708  "dma_device_id": "system",
00:08:47.708  "dma_device_type": 1
00:08:47.708  },
00:08:47.708  {
00:08:47.708  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:47.708  "dma_device_type": 2
00:08:47.708  }
00:08:47.708  ],
00:08:47.708  "driver_specific": {}
00:08:47.708  }
00:08:47.708  ]
00:08:47.708   11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:47.708   11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:08:47.708   11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:47.708   11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:47.708   11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2
00:08:47.708   11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:47.708   11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:47.708   11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:47.708   11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:47.708   11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:47.708   11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:47.708   11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:47.708   11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:47.708   11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:47.708    11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:47.708    11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:47.708    11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:47.708    11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:47.709    11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:47.967   11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:47.967    "name": "Existed_Raid",
00:08:47.967    "uuid": "e1d4ac77-2bd1-46e1-9345-a801feeef216",
00:08:47.967    "strip_size_kb": 64,
00:08:47.967    "state": "online",
00:08:47.967    "raid_level": "concat",
00:08:47.967    "superblock": false,
00:08:47.967    "num_base_bdevs": 2,
00:08:47.967    "num_base_bdevs_discovered": 2,
00:08:47.967    "num_base_bdevs_operational": 2,
00:08:47.967    "base_bdevs_list": [
00:08:47.967      {
00:08:47.967        "name": "BaseBdev1",
00:08:47.967        "uuid": "061d1442-e7ef-40b6-99c7-e1abd5e982b9",
00:08:47.967        "is_configured": true,
00:08:47.967        "data_offset": 0,
00:08:47.967        "data_size": 65536
00:08:47.967      },
00:08:47.967      {
00:08:47.967        "name": "BaseBdev2",
00:08:47.967        "uuid": "07b06695-be81-4598-abae-f396ae72a1e2",
00:08:47.967        "is_configured": true,
00:08:47.967        "data_offset": 0,
00:08:47.967        "data_size": 65536
00:08:47.967      }
00:08:47.967    ]
00:08:47.967  }'
00:08:47.967   11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:47.967   11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:48.227   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:08:48.227   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:08:48.227   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:48.227   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:48.227   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:48.227   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:48.227    11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:08:48.227    11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:48.227    11:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:48.227    11:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:48.227  [2024-12-16 11:30:14.224766] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:48.227    11:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:48.227   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:48.227    "name": "Existed_Raid",
00:08:48.227    "aliases": [
00:08:48.227      "e1d4ac77-2bd1-46e1-9345-a801feeef216"
00:08:48.227    ],
00:08:48.227    "product_name": "Raid Volume",
00:08:48.227    "block_size": 512,
00:08:48.227    "num_blocks": 131072,
00:08:48.227    "uuid": "e1d4ac77-2bd1-46e1-9345-a801feeef216",
00:08:48.227    "assigned_rate_limits": {
00:08:48.227      "rw_ios_per_sec": 0,
00:08:48.227      "rw_mbytes_per_sec": 0,
00:08:48.227      "r_mbytes_per_sec": 0,
00:08:48.227      "w_mbytes_per_sec": 0
00:08:48.227    },
00:08:48.227    "claimed": false,
00:08:48.227    "zoned": false,
00:08:48.227    "supported_io_types": {
00:08:48.227      "read": true,
00:08:48.227      "write": true,
00:08:48.227      "unmap": true,
00:08:48.227      "flush": true,
00:08:48.227      "reset": true,
00:08:48.227      "nvme_admin": false,
00:08:48.227      "nvme_io": false,
00:08:48.227      "nvme_io_md": false,
00:08:48.227      "write_zeroes": true,
00:08:48.227      "zcopy": false,
00:08:48.227      "get_zone_info": false,
00:08:48.227      "zone_management": false,
00:08:48.227      "zone_append": false,
00:08:48.227      "compare": false,
00:08:48.227      "compare_and_write": false,
00:08:48.227      "abort": false,
00:08:48.227      "seek_hole": false,
00:08:48.227      "seek_data": false,
00:08:48.227      "copy": false,
00:08:48.227      "nvme_iov_md": false
00:08:48.227    },
00:08:48.227    "memory_domains": [
00:08:48.227      {
00:08:48.227        "dma_device_id": "system",
00:08:48.227        "dma_device_type": 1
00:08:48.227      },
00:08:48.227      {
00:08:48.227        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:48.227        "dma_device_type": 2
00:08:48.227      },
00:08:48.227      {
00:08:48.227        "dma_device_id": "system",
00:08:48.227        "dma_device_type": 1
00:08:48.227      },
00:08:48.227      {
00:08:48.227        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:48.227        "dma_device_type": 2
00:08:48.227      }
00:08:48.227    ],
00:08:48.227    "driver_specific": {
00:08:48.227      "raid": {
00:08:48.227        "uuid": "e1d4ac77-2bd1-46e1-9345-a801feeef216",
00:08:48.227        "strip_size_kb": 64,
00:08:48.227        "state": "online",
00:08:48.227        "raid_level": "concat",
00:08:48.227        "superblock": false,
00:08:48.227        "num_base_bdevs": 2,
00:08:48.227        "num_base_bdevs_discovered": 2,
00:08:48.227        "num_base_bdevs_operational": 2,
00:08:48.227        "base_bdevs_list": [
00:08:48.227          {
00:08:48.227            "name": "BaseBdev1",
00:08:48.227            "uuid": "061d1442-e7ef-40b6-99c7-e1abd5e982b9",
00:08:48.227            "is_configured": true,
00:08:48.227            "data_offset": 0,
00:08:48.227            "data_size": 65536
00:08:48.227          },
00:08:48.227          {
00:08:48.227            "name": "BaseBdev2",
00:08:48.227            "uuid": "07b06695-be81-4598-abae-f396ae72a1e2",
00:08:48.227            "is_configured": true,
00:08:48.227            "data_offset": 0,
00:08:48.227            "data_size": 65536
00:08:48.227          }
00:08:48.227        ]
00:08:48.227      }
00:08:48.227    }
00:08:48.227  }'
00:08:48.227    11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:48.488   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:08:48.488  BaseBdev2'
00:08:48.488    11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:48.488   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:08:48.488   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:48.488    11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:08:48.488    11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:48.488    11:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:48.488    11:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:48.488    11:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:48.488   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:08:48.488   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:08:48.488   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:48.488    11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:08:48.488    11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:48.488    11:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:48.488    11:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:48.488    11:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:48.488   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:08:48.488   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:08:48.488   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:48.488   11:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:48.488   11:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:48.488  [2024-12-16 11:30:14.440112] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:48.488  [2024-12-16 11:30:14.440147] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:48.488  [2024-12-16 11:30:14.440227] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:48.488   11:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:48.488   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:08:48.488   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:08:48.488   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:48.488   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:08:48.488   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:08:48.488   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1
00:08:48.488   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:48.488   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:08:48.488   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:48.488   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:48.488   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:08:48.488   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:48.488   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:48.488   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:48.488   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:48.488    11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:48.488    11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:48.488    11:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:48.488    11:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:48.488    11:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:48.488   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:48.488    "name": "Existed_Raid",
00:08:48.488    "uuid": "e1d4ac77-2bd1-46e1-9345-a801feeef216",
00:08:48.488    "strip_size_kb": 64,
00:08:48.488    "state": "offline",
00:08:48.488    "raid_level": "concat",
00:08:48.488    "superblock": false,
00:08:48.488    "num_base_bdevs": 2,
00:08:48.488    "num_base_bdevs_discovered": 1,
00:08:48.488    "num_base_bdevs_operational": 1,
00:08:48.488    "base_bdevs_list": [
00:08:48.488      {
00:08:48.488        "name": null,
00:08:48.488        "uuid": "00000000-0000-0000-0000-000000000000",
00:08:48.488        "is_configured": false,
00:08:48.488        "data_offset": 0,
00:08:48.488        "data_size": 65536
00:08:48.488      },
00:08:48.488      {
00:08:48.488        "name": "BaseBdev2",
00:08:48.488        "uuid": "07b06695-be81-4598-abae-f396ae72a1e2",
00:08:48.488        "is_configured": true,
00:08:48.488        "data_offset": 0,
00:08:48.488        "data_size": 65536
00:08:48.488      }
00:08:48.488    ]
00:08:48.488  }'
00:08:48.488   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:48.488   11:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:49.057   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:08:49.057   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:49.057    11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:49.057    11:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:49.057    11:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:49.057    11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:08:49.057    11:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:49.057   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:08:49.057   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:49.057   11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:08:49.057   11:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:49.057   11:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:49.057  [2024-12-16 11:30:14.999259] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:49.057  [2024-12-16 11:30:14.999332] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline
00:08:49.057   11:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:49.057   11:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:08:49.057   11:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:49.057    11:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:49.057    11:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:49.057    11:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:49.057    11:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:08:49.057    11:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:49.057   11:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:08:49.057   11:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:08:49.057   11:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:08:49.057   11:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73293
00:08:49.057   11:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73293 ']'
00:08:49.057   11:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 73293
00:08:49.057    11:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname
00:08:49.057   11:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:49.057    11:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73293
00:08:49.057  killing process with pid 73293
00:08:49.057   11:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:49.057   11:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:49.057   11:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73293'
00:08:49.057   11:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73293
00:08:49.057  [2024-12-16 11:30:15.095264] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:49.057   11:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73293
00:08:49.057  [2024-12-16 11:30:15.096313] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:49.317   11:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:08:49.317  
00:08:49.317  real	0m4.216s
00:08:49.317  user	0m6.674s
00:08:49.317  sys	0m0.836s
00:08:49.317   11:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:49.317   11:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:49.317  ************************************
00:08:49.317  END TEST raid_state_function_test
00:08:49.317  ************************************
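The block below repeats the same state-machine checks with on-disk superblocks enabled (the trailing "true" argument becomes the -s flag on bdev_raid_create). A rough sketch of the RPC flow it exercises, assuming the rpc_cmd helper used throughout this trace (equivalently scripts/rpc.py against /var/tmp/spdk.sock):

  # create the array first; the named base bdevs do not exist yet, so it sits in "configuring"
  rpc_cmd bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  # register the members; the raid module claims each one as it appears
  rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
  rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
  # the array flips to "online" once both members are claimed
  rpc_cmd bdev_raid_get_bdevs all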
00:08:49.576   11:30:15 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true
00:08:49.576   11:30:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:08:49.576   11:30:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:49.576   11:30:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:49.576  ************************************
00:08:49.576  START TEST raid_state_function_test_sb
00:08:49.576  ************************************
00:08:49.576   11:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true
00:08:49.576   11:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:08:49.576   11:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:08:49.576   11:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:08:49.576   11:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:08:49.577    11:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:08:49.577    11:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:49.577    11:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:08:49.577    11:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:49.577    11:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:49.577    11:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:08:49.577    11:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:49.577    11:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:49.577   11:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:08:49.577   11:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:08:49.577   11:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:08:49.577   11:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:08:49.577   11:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:08:49.577   11:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:08:49.577   11:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:08:49.577   11:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:08:49.577   11:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:08:49.577  Process raid pid: 73535
00:08:49.577   11:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:08:49.577   11:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:08:49.577   11:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73535
00:08:49.577   11:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:49.577   11:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73535'
00:08:49.577   11:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73535
00:08:49.577   11:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73535 ']'
00:08:49.577   11:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:49.577   11:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:49.577   11:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:49.577  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:49.577   11:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:49.577   11:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:49.577  [2024-12-16 11:30:15.534199] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:08:49.577  [2024-12-16 11:30:15.534439] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:49.836  [2024-12-16 11:30:15.699745] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:49.836  [2024-12-16 11:30:15.751737] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:49.836  [2024-12-16 11:30:15.797240] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:49.836  [2024-12-16 11:30:15.797369] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:50.406   11:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:50.406   11:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0
00:08:50.406   11:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:50.406   11:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:50.406   11:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:50.406  [2024-12-16 11:30:16.444275] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:50.406  [2024-12-16 11:30:16.444385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:50.406  [2024-12-16 11:30:16.444432] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:50.406  [2024-12-16 11:30:16.444467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:50.406   11:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:50.406   11:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:08:50.406   11:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:50.406   11:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:50.406   11:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:50.406   11:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:50.406   11:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:50.406   11:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:50.406   11:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:50.406   11:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:50.406   11:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:50.406    11:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:50.406    11:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:50.406    11:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:50.406    11:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:50.666    11:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:50.666   11:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:50.666    "name": "Existed_Raid",
00:08:50.666    "uuid": "6ca85f04-2986-4403-8c06-6aa172945b8a",
00:08:50.666    "strip_size_kb": 64,
00:08:50.666    "state": "configuring",
00:08:50.666    "raid_level": "concat",
00:08:50.666    "superblock": true,
00:08:50.666    "num_base_bdevs": 2,
00:08:50.666    "num_base_bdevs_discovered": 0,
00:08:50.666    "num_base_bdevs_operational": 2,
00:08:50.666    "base_bdevs_list": [
00:08:50.666      {
00:08:50.666        "name": "BaseBdev1",
00:08:50.666        "uuid": "00000000-0000-0000-0000-000000000000",
00:08:50.666        "is_configured": false,
00:08:50.666        "data_offset": 0,
00:08:50.666        "data_size": 0
00:08:50.666      },
00:08:50.666      {
00:08:50.666        "name": "BaseBdev2",
00:08:50.666        "uuid": "00000000-0000-0000-0000-000000000000",
00:08:50.666        "is_configured": false,
00:08:50.666        "data_offset": 0,
00:08:50.666        "data_size": 0
00:08:50.666      }
00:08:50.666    ]
00:08:50.666  }'
00:08:50.666   11:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:50.666   11:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
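The dump above is what verify_raid_bdev_state works from: it selects the Existed_Raid entry out of bdev_raid_get_bdevs and checks state, raid_level, strip_size_kb and the discovered/operational counts against the expected arguments (configuring, concat, 64, 2). A minimal jq sketch of that extraction, assuming the same JSON shape as the dump:

  rpc_cmd bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid")
             | [.state, .raid_level, (.strip_size_kb|tostring),
                (.num_base_bdevs_discovered|tostring),
                (.num_base_bdevs_operational|tostring)] | join(" ")'
  # expected at this point: "configuring concat 64 0 2"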
00:08:50.925   11:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:50.925   11:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:50.925   11:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:50.925  [2024-12-16 11:30:16.951515] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:50.925  [2024-12-16 11:30:16.951585] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:08:50.925   11:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:50.925   11:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:50.925   11:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:50.925   11:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:50.925  [2024-12-16 11:30:16.963561] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:50.925  [2024-12-16 11:30:16.963657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:50.925  [2024-12-16 11:30:16.963673] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:50.925  [2024-12-16 11:30:16.963685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:50.925   11:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:50.925   11:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:50.925   11:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:50.925   11:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:50.925  [2024-12-16 11:30:16.985185] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:50.925  BaseBdev1
00:08:50.925   11:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:50.925   11:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:08:50.925   11:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:08:50.925   11:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:50.925   11:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:08:50.925   11:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:50.925   11:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:50.925   11:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:50.925   11:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:50.925   11:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:51.184   11:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:51.184   11:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:51.184   11:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:51.184   11:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:51.184  [
00:08:51.184  {
00:08:51.184  "name": "BaseBdev1",
00:08:51.184  "aliases": [
00:08:51.184  "a4b19bcb-5367-427d-bb05-6a4de8cdbf5d"
00:08:51.184  ],
00:08:51.184  "product_name": "Malloc disk",
00:08:51.184  "block_size": 512,
00:08:51.184  "num_blocks": 65536,
00:08:51.184  "uuid": "a4b19bcb-5367-427d-bb05-6a4de8cdbf5d",
00:08:51.184  "assigned_rate_limits": {
00:08:51.184  "rw_ios_per_sec": 0,
00:08:51.184  "rw_mbytes_per_sec": 0,
00:08:51.184  "r_mbytes_per_sec": 0,
00:08:51.184  "w_mbytes_per_sec": 0
00:08:51.184  },
00:08:51.184  "claimed": true,
00:08:51.184  "claim_type": "exclusive_write",
00:08:51.184  "zoned": false,
00:08:51.184  "supported_io_types": {
00:08:51.184  "read": true,
00:08:51.184  "write": true,
00:08:51.184  "unmap": true,
00:08:51.184  "flush": true,
00:08:51.184  "reset": true,
00:08:51.184  "nvme_admin": false,
00:08:51.184  "nvme_io": false,
00:08:51.184  "nvme_io_md": false,
00:08:51.184  "write_zeroes": true,
00:08:51.184  "zcopy": true,
00:08:51.184  "get_zone_info": false,
00:08:51.184  "zone_management": false,
00:08:51.184  "zone_append": false,
00:08:51.184  "compare": false,
00:08:51.184  "compare_and_write": false,
00:08:51.184  "abort": true,
00:08:51.184  "seek_hole": false,
00:08:51.184  "seek_data": false,
00:08:51.184  "copy": true,
00:08:51.184  "nvme_iov_md": false
00:08:51.184  },
00:08:51.184  "memory_domains": [
00:08:51.184  {
00:08:51.184  "dma_device_id": "system",
00:08:51.184  "dma_device_type": 1
00:08:51.184  },
00:08:51.184  {
00:08:51.184  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:51.184  "dma_device_type": 2
00:08:51.184  }
00:08:51.184  ],
00:08:51.184  "driver_specific": {}
00:08:51.184  }
00:08:51.184  ]
00:08:51.184   11:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:51.184   11:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
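waitforbdev above is a readiness gate: it waits for any pending examine callbacks and then queries the named bdev with a timeout, so the test only proceeds once BaseBdev1 is actually registered. The same check by hand, using the two RPCs shown in the trace:

  rpc_cmd bdev_wait_for_examine
  rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000   # errors out if the bdev does not appear within 2000 ms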
00:08:51.184   11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:08:51.184   11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:51.184   11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:51.184   11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:51.184   11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:51.184   11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:51.184   11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:51.184   11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:51.184   11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:51.184   11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:51.184    11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:51.184    11:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:51.184    11:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:51.184    11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:51.184    11:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:51.184   11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:51.184    "name": "Existed_Raid",
00:08:51.184    "uuid": "8e2bf0cf-f86f-4720-b5db-4cbcdd074f8b",
00:08:51.184    "strip_size_kb": 64,
00:08:51.184    "state": "configuring",
00:08:51.184    "raid_level": "concat",
00:08:51.184    "superblock": true,
00:08:51.184    "num_base_bdevs": 2,
00:08:51.184    "num_base_bdevs_discovered": 1,
00:08:51.184    "num_base_bdevs_operational": 2,
00:08:51.184    "base_bdevs_list": [
00:08:51.184      {
00:08:51.184        "name": "BaseBdev1",
00:08:51.184        "uuid": "a4b19bcb-5367-427d-bb05-6a4de8cdbf5d",
00:08:51.184        "is_configured": true,
00:08:51.185        "data_offset": 2048,
00:08:51.185        "data_size": 63488
00:08:51.185      },
00:08:51.185      {
00:08:51.185        "name": "BaseBdev2",
00:08:51.185        "uuid": "00000000-0000-0000-0000-000000000000",
00:08:51.185        "is_configured": false,
00:08:51.185        "data_offset": 0,
00:08:51.185        "data_size": 0
00:08:51.185      }
00:08:51.185    ]
00:08:51.185  }'
00:08:51.185   11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:51.185   11:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:51.444   11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:51.444   11:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:51.444   11:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:51.444  [2024-12-16 11:30:17.492631] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:51.444  [2024-12-16 11:30:17.492790] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:08:51.444   11:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:51.444   11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:51.444   11:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:51.444   11:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:51.444  [2024-12-16 11:30:17.504656] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:51.444  [2024-12-16 11:30:17.506835] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:51.444  [2024-12-16 11:30:17.506883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:51.704   11:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:51.704   11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:08:51.704   11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:51.704   11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:08:51.704   11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:51.704   11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:51.704   11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:51.704   11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:51.704   11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:51.704   11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:51.704   11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:51.704   11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:51.704   11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:51.704    11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:51.704    11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:51.704    11:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:51.704    11:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:51.704    11:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:51.704   11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:51.704    "name": "Existed_Raid",
00:08:51.704    "uuid": "721b8a11-da77-4d57-a3e9-83e857eade23",
00:08:51.704    "strip_size_kb": 64,
00:08:51.704    "state": "configuring",
00:08:51.704    "raid_level": "concat",
00:08:51.704    "superblock": true,
00:08:51.704    "num_base_bdevs": 2,
00:08:51.704    "num_base_bdevs_discovered": 1,
00:08:51.704    "num_base_bdevs_operational": 2,
00:08:51.704    "base_bdevs_list": [
00:08:51.704      {
00:08:51.704        "name": "BaseBdev1",
00:08:51.704        "uuid": "a4b19bcb-5367-427d-bb05-6a4de8cdbf5d",
00:08:51.704        "is_configured": true,
00:08:51.704        "data_offset": 2048,
00:08:51.704        "data_size": 63488
00:08:51.704      },
00:08:51.704      {
00:08:51.704        "name": "BaseBdev2",
00:08:51.704        "uuid": "00000000-0000-0000-0000-000000000000",
00:08:51.704        "is_configured": false,
00:08:51.704        "data_offset": 0,
00:08:51.704        "data_size": 0
00:08:51.704      }
00:08:51.704    ]
00:08:51.704  }'
00:08:51.704   11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:51.704   11:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:51.965   11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:51.965   11:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:51.965   11:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:51.965  [2024-12-16 11:30:17.977256] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:51.965  [2024-12-16 11:30:17.977612] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:08:51.965  [2024-12-16 11:30:17.977684] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:08:51.965  [2024-12-16 11:30:17.978090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:08:51.965  BaseBdev2
00:08:51.965  [2024-12-16 11:30:17.978324] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:08:51.965  [2024-12-16 11:30:17.978393] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:08:51.965  [2024-12-16 11:30:17.978599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:51.965   11:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:51.965   11:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:08:51.965   11:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:08:51.965   11:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:51.965   11:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:08:51.965   11:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:51.965   11:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:51.965   11:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:51.965   11:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:51.965   11:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:51.965   11:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:51.965   11:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:51.965   11:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:51.965   11:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:51.965  [
00:08:51.965  {
00:08:51.965  "name": "BaseBdev2",
00:08:51.965  "aliases": [
00:08:51.965  "d04e770e-f0d1-4526-8bc1-58d21bed5cd6"
00:08:51.965  ],
00:08:51.965  "product_name": "Malloc disk",
00:08:51.965  "block_size": 512,
00:08:51.965  "num_blocks": 65536,
00:08:51.965  "uuid": "d04e770e-f0d1-4526-8bc1-58d21bed5cd6",
00:08:51.965  "assigned_rate_limits": {
00:08:51.965  "rw_ios_per_sec": 0,
00:08:51.965  "rw_mbytes_per_sec": 0,
00:08:51.965  "r_mbytes_per_sec": 0,
00:08:51.965  "w_mbytes_per_sec": 0
00:08:51.965  },
00:08:51.965  "claimed": true,
00:08:51.965  "claim_type": "exclusive_write",
00:08:51.965  "zoned": false,
00:08:51.965  "supported_io_types": {
00:08:51.965  "read": true,
00:08:51.965  "write": true,
00:08:51.965  "unmap": true,
00:08:51.965  "flush": true,
00:08:51.965  "reset": true,
00:08:51.965  "nvme_admin": false,
00:08:51.965  "nvme_io": false,
00:08:51.965  "nvme_io_md": false,
00:08:51.965  "write_zeroes": true,
00:08:51.965  "zcopy": true,
00:08:51.965  "get_zone_info": false,
00:08:51.965  "zone_management": false,
00:08:51.965  "zone_append": false,
00:08:51.965  "compare": false,
00:08:51.965  "compare_and_write": false,
00:08:51.965  "abort": true,
00:08:51.965  "seek_hole": false,
00:08:51.965  "seek_data": false,
00:08:51.965  "copy": true,
00:08:51.965  "nvme_iov_md": false
00:08:51.965  },
00:08:51.965  "memory_domains": [
00:08:51.965  {
00:08:51.965  "dma_device_id": "system",
00:08:51.965  "dma_device_type": 1
00:08:51.965  },
00:08:51.965  {
00:08:51.965  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:51.965  "dma_device_type": 2
00:08:51.965  }
00:08:51.965  ],
00:08:51.965  "driver_specific": {}
00:08:51.965  }
00:08:51.965  ]
00:08:51.965   11:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:51.965   11:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:08:51.965   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:51.965   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:51.965   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2
00:08:51.965   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:51.965   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:51.965   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:51.965   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:51.965   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:51.965   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:51.965   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:51.965   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:51.965   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:51.965    11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:51.965    11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:51.966    11:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:51.966    11:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:52.225    11:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.225   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:52.225    "name": "Existed_Raid",
00:08:52.225    "uuid": "721b8a11-da77-4d57-a3e9-83e857eade23",
00:08:52.225    "strip_size_kb": 64,
00:08:52.225    "state": "online",
00:08:52.225    "raid_level": "concat",
00:08:52.225    "superblock": true,
00:08:52.225    "num_base_bdevs": 2,
00:08:52.225    "num_base_bdevs_discovered": 2,
00:08:52.225    "num_base_bdevs_operational": 2,
00:08:52.225    "base_bdevs_list": [
00:08:52.225      {
00:08:52.225        "name": "BaseBdev1",
00:08:52.225        "uuid": "a4b19bcb-5367-427d-bb05-6a4de8cdbf5d",
00:08:52.225        "is_configured": true,
00:08:52.225        "data_offset": 2048,
00:08:52.225        "data_size": 63488
00:08:52.225      },
00:08:52.225      {
00:08:52.225        "name": "BaseBdev2",
00:08:52.225        "uuid": "d04e770e-f0d1-4526-8bc1-58d21bed5cd6",
00:08:52.225        "is_configured": true,
00:08:52.225        "data_offset": 2048,
00:08:52.225        "data_size": 63488
00:08:52.225      }
00:08:52.225    ]
00:08:52.225  }'
00:08:52.225   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:52.225   11:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
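With BaseBdev2 claimed, num_base_bdevs_discovered reaches num_base_bdevs_operational and the concat array flips from "configuring" to "online" (the io device registration for 0x617000006980 is logged above). A quick way to confirm the transition, reusing the RPC and jq pattern from the trace:

  rpc_cmd bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid").state'   # prints "online"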
00:08:52.485   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:08:52.485   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:08:52.485   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:52.485   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:52.485   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:08:52.485   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:52.485    11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:08:52.485    11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:52.485    11:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.485    11:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:52.485  [2024-12-16 11:30:18.492880] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:52.485    11:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.485   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:52.485    "name": "Existed_Raid",
00:08:52.485    "aliases": [
00:08:52.485      "721b8a11-da77-4d57-a3e9-83e857eade23"
00:08:52.485    ],
00:08:52.485    "product_name": "Raid Volume",
00:08:52.485    "block_size": 512,
00:08:52.485    "num_blocks": 126976,
00:08:52.485    "uuid": "721b8a11-da77-4d57-a3e9-83e857eade23",
00:08:52.485    "assigned_rate_limits": {
00:08:52.485      "rw_ios_per_sec": 0,
00:08:52.485      "rw_mbytes_per_sec": 0,
00:08:52.485      "r_mbytes_per_sec": 0,
00:08:52.485      "w_mbytes_per_sec": 0
00:08:52.485    },
00:08:52.485    "claimed": false,
00:08:52.485    "zoned": false,
00:08:52.485    "supported_io_types": {
00:08:52.485      "read": true,
00:08:52.485      "write": true,
00:08:52.485      "unmap": true,
00:08:52.485      "flush": true,
00:08:52.485      "reset": true,
00:08:52.485      "nvme_admin": false,
00:08:52.485      "nvme_io": false,
00:08:52.485      "nvme_io_md": false,
00:08:52.485      "write_zeroes": true,
00:08:52.485      "zcopy": false,
00:08:52.485      "get_zone_info": false,
00:08:52.485      "zone_management": false,
00:08:52.485      "zone_append": false,
00:08:52.485      "compare": false,
00:08:52.485      "compare_and_write": false,
00:08:52.485      "abort": false,
00:08:52.485      "seek_hole": false,
00:08:52.485      "seek_data": false,
00:08:52.485      "copy": false,
00:08:52.485      "nvme_iov_md": false
00:08:52.485    },
00:08:52.485    "memory_domains": [
00:08:52.485      {
00:08:52.485        "dma_device_id": "system",
00:08:52.485        "dma_device_type": 1
00:08:52.485      },
00:08:52.485      {
00:08:52.485        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:52.485        "dma_device_type": 2
00:08:52.485      },
00:08:52.485      {
00:08:52.485        "dma_device_id": "system",
00:08:52.485        "dma_device_type": 1
00:08:52.485      },
00:08:52.485      {
00:08:52.485        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:52.485        "dma_device_type": 2
00:08:52.485      }
00:08:52.485    ],
00:08:52.485    "driver_specific": {
00:08:52.485      "raid": {
00:08:52.485        "uuid": "721b8a11-da77-4d57-a3e9-83e857eade23",
00:08:52.485        "strip_size_kb": 64,
00:08:52.485        "state": "online",
00:08:52.485        "raid_level": "concat",
00:08:52.485        "superblock": true,
00:08:52.485        "num_base_bdevs": 2,
00:08:52.485        "num_base_bdevs_discovered": 2,
00:08:52.485        "num_base_bdevs_operational": 2,
00:08:52.485        "base_bdevs_list": [
00:08:52.485          {
00:08:52.485            "name": "BaseBdev1",
00:08:52.485            "uuid": "a4b19bcb-5367-427d-bb05-6a4de8cdbf5d",
00:08:52.485            "is_configured": true,
00:08:52.485            "data_offset": 2048,
00:08:52.485            "data_size": 63488
00:08:52.485          },
00:08:52.485          {
00:08:52.485            "name": "BaseBdev2",
00:08:52.485            "uuid": "d04e770e-f0d1-4526-8bc1-58d21bed5cd6",
00:08:52.485            "is_configured": true,
00:08:52.485            "data_offset": 2048,
00:08:52.485            "data_size": 63488
00:08:52.485          }
00:08:52.485        ]
00:08:52.485      }
00:08:52.485    }
00:08:52.485  }'
00:08:52.485    11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:52.745   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:08:52.745  BaseBdev2'
00:08:52.745    11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:52.745   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:08:52.745   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:52.746    11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:52.746    11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:08:52.746    11:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.746    11:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:52.746    11:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.746   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:08:52.746   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:08:52.746   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:52.746    11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:52.746    11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:08:52.746    11:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.746    11:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:52.746    11:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.746   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:08:52.746   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
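verify_raid_bdev_properties checks that the raid volume advertises the same metadata geometry as its members: block_size, md_size, md_interleave and dif_type are joined into one string per bdev and compared. Here both sides come out as "512   " (512-byte blocks, no metadata or DIF fields reported). A sketch of the comparison for one member, following the jq expressions in the trace:

  raid=$(rpc_cmd bdev_get_bdevs -b Existed_Raid \
           | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
  base=$(rpc_cmd bdev_get_bdevs -b BaseBdev1 \
           | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
  [[ "$raid" == "$base" ]]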
00:08:52.746   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:52.746   11:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.746   11:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:52.746  [2024-12-16 11:30:18.724176] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:52.746  [2024-12-16 11:30:18.724262] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:52.746  [2024-12-16 11:30:18.724347] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:52.746   11:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.746   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:08:52.746   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:08:52.746   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:52.746   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1
00:08:52.746   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:08:52.746   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1
00:08:52.746   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:52.746   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:08:52.746   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:52.746   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:52.746   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:08:52.746   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:52.746   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:52.746   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:52.746   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:52.746    11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:52.746    11:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:52.746    11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:52.746    11:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:52.746    11:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:52.746   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:52.746    "name": "Existed_Raid",
00:08:52.746    "uuid": "721b8a11-da77-4d57-a3e9-83e857eade23",
00:08:52.746    "strip_size_kb": 64,
00:08:52.746    "state": "offline",
00:08:52.746    "raid_level": "concat",
00:08:52.746    "superblock": true,
00:08:52.746    "num_base_bdevs": 2,
00:08:52.746    "num_base_bdevs_discovered": 1,
00:08:52.746    "num_base_bdevs_operational": 1,
00:08:52.746    "base_bdevs_list": [
00:08:52.746      {
00:08:52.746        "name": null,
00:08:52.746        "uuid": "00000000-0000-0000-0000-000000000000",
00:08:52.746        "is_configured": false,
00:08:52.746        "data_offset": 0,
00:08:52.746        "data_size": 63488
00:08:52.746      },
00:08:52.746      {
00:08:52.746        "name": "BaseBdev2",
00:08:52.746        "uuid": "d04e770e-f0d1-4526-8bc1-58d21bed5cd6",
00:08:52.746        "is_configured": true,
00:08:52.746        "data_offset": 2048,
00:08:52.746        "data_size": 63488
00:08:52.746      }
00:08:52.746    ]
00:08:52.746  }'
00:08:52.746   11:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:52.746   11:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
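Because concat carries no redundancy (has_redundancy returned 1 above), deleting BaseBdev1 cannot leave the array degraded: it is deconfigured straight to "offline", the freed slot shows up as a null entry with the all-zero UUID, and only one operational member remains. Roughly, with the commands from the trace:

  rpc_cmd bdev_malloc_delete BaseBdev1
  rpc_cmd bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid").state'   # "offline" for a concat array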
00:08:53.316   11:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:08:53.316   11:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:53.316    11:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:53.316    11:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:53.316    11:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:53.316    11:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:08:53.316    11:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:53.316   11:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:08:53.316   11:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:53.316   11:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:08:53.316   11:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:53.316   11:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:53.316  [2024-12-16 11:30:19.243468] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:53.316  [2024-12-16 11:30:19.243529] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline
00:08:53.316   11:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:53.316   11:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:08:53.316   11:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:53.316    11:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:53.316    11:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:53.316    11:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:53.316    11:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:08:53.316    11:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:53.316   11:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:08:53.316   11:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:08:53.316   11:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:08:53.316   11:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73535
00:08:53.316   11:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73535 ']'
00:08:53.316   11:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 73535
00:08:53.316    11:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname
00:08:53.316   11:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:53.316    11:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73535
00:08:53.316  killing process with pid 73535
00:08:53.316   11:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:53.316   11:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:53.316   11:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73535'
00:08:53.316   11:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 73535
00:08:53.316  [2024-12-16 11:30:19.358477] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:53.316   11:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 73535
00:08:53.316  [2024-12-16 11:30:19.359555] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:53.576  ************************************
00:08:53.576  END TEST raid_state_function_test_sb
00:08:53.576  ************************************
00:08:53.576   11:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:08:53.576  
00:08:53.576  real	0m4.181s
00:08:53.576  user	0m6.630s
00:08:53.576  sys	0m0.840s
00:08:53.576   11:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:53.576   11:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:53.834   11:30:19 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2
00:08:53.834   11:30:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:08:53.834   11:30:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:53.834   11:30:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:53.834  ************************************
00:08:53.834  START TEST raid_superblock_test
00:08:53.834  ************************************
00:08:53.834   11:30:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2
00:08:53.834   11:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat
00:08:53.834   11:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:08:53.834   11:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:08:53.834   11:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:08:53.834   11:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:08:53.834   11:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:08:53.834   11:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:08:53.834   11:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:08:53.834   11:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:08:53.834   11:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:08:53.834   11:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:08:53.834   11:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:08:53.834   11:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:08:53.834   11:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']'
00:08:53.834   11:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:08:53.834   11:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:08:53.834   11:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73776
00:08:53.834   11:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:08:53.834   11:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73776
00:08:53.834   11:30:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 73776 ']'
00:08:53.834   11:30:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:53.834   11:30:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:53.834   11:30:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:53.834  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:53.834   11:30:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:53.835   11:30:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:53.835  [2024-12-16 11:30:19.778108] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:08:53.835  [2024-12-16 11:30:19.778380] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73776 ]
00:08:54.092  [2024-12-16 11:30:19.943690] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:54.092  [2024-12-16 11:30:19.994773] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:54.092  [2024-12-16 11:30:20.040435] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:54.092  [2024-12-16 11:30:20.040582] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:54.660   11:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:54.660   11:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:08:54.661   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:08:54.661   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:54.661   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:08:54.661   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:08:54.661   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:08:54.661   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:08:54.661   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:08:54.661   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:08:54.661   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:08:54.661   11:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:54.661   11:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.661  malloc1
00:08:54.661   11:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:54.661   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:08:54.661   11:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:54.661   11:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.661  [2024-12-16 11:30:20.717470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:08:54.661  [2024-12-16 11:30:20.717648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:54.661  [2024-12-16 11:30:20.717687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:08:54.661  [2024-12-16 11:30:20.717707] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:54.661  [2024-12-16 11:30:20.720169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:54.661  [2024-12-16 11:30:20.720217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:08:54.661  pt1
00:08:54.661   11:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:54.661   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:08:54.661   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:54.661   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:08:54.661   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:08:54.661   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:08:54.661   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:08:54.661   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:08:54.661   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:08:54.661   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:08:54.661   11:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:54.661   11:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.920  malloc2
00:08:54.920   11:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:54.920   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:08:54.920   11:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:54.920   11:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.920  [2024-12-16 11:30:20.756503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:08:54.920  [2024-12-16 11:30:20.756652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:54.920  [2024-12-16 11:30:20.756687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:08:54.920  [2024-12-16 11:30:20.756704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:54.920  [2024-12-16 11:30:20.759679] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:54.920  [2024-12-16 11:30:20.759787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:08:54.920  pt2
00:08:54.920   11:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:54.920   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:08:54.920   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:54.920   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:08:54.920   11:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:54.920   11:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.920  [2024-12-16 11:30:20.768620] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:08:54.920  [2024-12-16 11:30:20.770760] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:08:54.920  [2024-12-16 11:30:20.770968] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:08:54.920  [2024-12-16 11:30:20.770991] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:08:54.920  [2024-12-16 11:30:20.771313] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:08:54.920  [2024-12-16 11:30:20.771488] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:08:54.920  [2024-12-16 11:30:20.771505] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:08:54.920  [2024-12-16 11:30:20.771722] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:54.920   11:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:54.920   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:08:54.920   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:54.920   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:54.920   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:54.920   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:54.920   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:54.920   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:54.920   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:54.921   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:54.921   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:54.921    11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:54.921    11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:54.921    11:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:54.921    11:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.921    11:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:54.921   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:54.921    "name": "raid_bdev1",
00:08:54.921    "uuid": "3acc53b8-04ed-4b54-979a-e6936a3634b9",
00:08:54.921    "strip_size_kb": 64,
00:08:54.921    "state": "online",
00:08:54.921    "raid_level": "concat",
00:08:54.921    "superblock": true,
00:08:54.921    "num_base_bdevs": 2,
00:08:54.921    "num_base_bdevs_discovered": 2,
00:08:54.921    "num_base_bdevs_operational": 2,
00:08:54.921    "base_bdevs_list": [
00:08:54.921      {
00:08:54.921        "name": "pt1",
00:08:54.921        "uuid": "00000000-0000-0000-0000-000000000001",
00:08:54.921        "is_configured": true,
00:08:54.921        "data_offset": 2048,
00:08:54.921        "data_size": 63488
00:08:54.921      },
00:08:54.921      {
00:08:54.921        "name": "pt2",
00:08:54.921        "uuid": "00000000-0000-0000-0000-000000000002",
00:08:54.921        "is_configured": true,
00:08:54.921        "data_offset": 2048,
00:08:54.921        "data_size": 63488
00:08:54.921      }
00:08:54.921    ]
00:08:54.921  }'
00:08:54.921   11:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:54.921   11:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
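The block above assembles the device under test purely through JSON-RPC: two 32 MiB malloc bdevs, a passthru bdev with a fixed UUID on top of each, and finally a two-disk concat RAID with a 64 KiB strip size and an on-disk superblock (-s). A condensed sketch of the same calls, assuming scripts/rpc.py as the client (the log only shows the rpc_cmd wrapper):

  ./scripts/rpc.py bdev_malloc_create 32 512 -b malloc1
  ./scripts/rpc.py bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  ./scripts/rpc.py bdev_malloc_create 32 512 -b malloc2
  ./scripts/rpc.py bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

  # -z: strip size in KiB, -r: raid level, -s: write a superblock to the base bdevs
  ./scripts/rpc.py bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s

  # the state check in the log parses this output with jq
  ./scripts/rpc.py bdev_raid_get_bdevs all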
00:08:55.489   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:08:55.489   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:08:55.489   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:55.489   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:55.489   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:55.489   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:55.489    11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:55.489    11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:55.489    11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.489    11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.489  [2024-12-16 11:30:21.272101] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:55.489    11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.489   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:55.489    "name": "raid_bdev1",
00:08:55.489    "aliases": [
00:08:55.489      "3acc53b8-04ed-4b54-979a-e6936a3634b9"
00:08:55.489    ],
00:08:55.489    "product_name": "Raid Volume",
00:08:55.489    "block_size": 512,
00:08:55.489    "num_blocks": 126976,
00:08:55.489    "uuid": "3acc53b8-04ed-4b54-979a-e6936a3634b9",
00:08:55.489    "assigned_rate_limits": {
00:08:55.489      "rw_ios_per_sec": 0,
00:08:55.489      "rw_mbytes_per_sec": 0,
00:08:55.489      "r_mbytes_per_sec": 0,
00:08:55.489      "w_mbytes_per_sec": 0
00:08:55.489    },
00:08:55.489    "claimed": false,
00:08:55.489    "zoned": false,
00:08:55.489    "supported_io_types": {
00:08:55.489      "read": true,
00:08:55.489      "write": true,
00:08:55.489      "unmap": true,
00:08:55.489      "flush": true,
00:08:55.489      "reset": true,
00:08:55.489      "nvme_admin": false,
00:08:55.489      "nvme_io": false,
00:08:55.489      "nvme_io_md": false,
00:08:55.489      "write_zeroes": true,
00:08:55.489      "zcopy": false,
00:08:55.489      "get_zone_info": false,
00:08:55.489      "zone_management": false,
00:08:55.489      "zone_append": false,
00:08:55.489      "compare": false,
00:08:55.489      "compare_and_write": false,
00:08:55.489      "abort": false,
00:08:55.489      "seek_hole": false,
00:08:55.489      "seek_data": false,
00:08:55.489      "copy": false,
00:08:55.489      "nvme_iov_md": false
00:08:55.489    },
00:08:55.489    "memory_domains": [
00:08:55.489      {
00:08:55.489        "dma_device_id": "system",
00:08:55.489        "dma_device_type": 1
00:08:55.489      },
00:08:55.489      {
00:08:55.489        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:55.489        "dma_device_type": 2
00:08:55.489      },
00:08:55.489      {
00:08:55.489        "dma_device_id": "system",
00:08:55.489        "dma_device_type": 1
00:08:55.489      },
00:08:55.489      {
00:08:55.489        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:55.489        "dma_device_type": 2
00:08:55.489      }
00:08:55.489    ],
00:08:55.489    "driver_specific": {
00:08:55.489      "raid": {
00:08:55.489        "uuid": "3acc53b8-04ed-4b54-979a-e6936a3634b9",
00:08:55.489        "strip_size_kb": 64,
00:08:55.489        "state": "online",
00:08:55.489        "raid_level": "concat",
00:08:55.489        "superblock": true,
00:08:55.489        "num_base_bdevs": 2,
00:08:55.489        "num_base_bdevs_discovered": 2,
00:08:55.489        "num_base_bdevs_operational": 2,
00:08:55.489        "base_bdevs_list": [
00:08:55.489          {
00:08:55.489            "name": "pt1",
00:08:55.489            "uuid": "00000000-0000-0000-0000-000000000001",
00:08:55.489            "is_configured": true,
00:08:55.489            "data_offset": 2048,
00:08:55.489            "data_size": 63488
00:08:55.489          },
00:08:55.489          {
00:08:55.489            "name": "pt2",
00:08:55.489            "uuid": "00000000-0000-0000-0000-000000000002",
00:08:55.489            "is_configured": true,
00:08:55.489            "data_offset": 2048,
00:08:55.489            "data_size": 63488
00:08:55.489          }
00:08:55.489        ]
00:08:55.489      }
00:08:55.489    }
00:08:55.489  }'
00:08:55.489    11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:55.489   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:08:55.489  pt2'
00:08:55.489    11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:55.489   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:08:55.489   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:55.489    11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:08:55.489    11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:55.489    11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.489    11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.489    11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.489   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:08:55.489   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:08:55.489   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:55.489    11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:08:55.489    11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.489    11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.489    11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:55.489    11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.489   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:08:55.489   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:08:55.489    11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:08:55.489    11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:55.489    11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.489    11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.489  [2024-12-16 11:30:21.523976] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:55.489    11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3acc53b8-04ed-4b54-979a-e6936a3634b9
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3acc53b8-04ed-4b54-979a-e6936a3634b9 ']'
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.749  [2024-12-16 11:30:21.571679] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:55.749  [2024-12-16 11:30:21.571778] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:55.749  [2024-12-16 11:30:21.571910] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:55.749  [2024-12-16 11:30:21.572010] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:55.749  [2024-12-16 11:30:21.572084] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.749    11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:55.749    11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:08:55.749    11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.749    11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.749    11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.749    11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:08:55.749    11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:08:55.749    11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.749    11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.749    11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:55.749    11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.749  [2024-12-16 11:30:21.707739] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:08:55.749  [2024-12-16 11:30:21.709921] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:08:55.749  [2024-12-16 11:30:21.710011] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:08:55.749  [2024-12-16 11:30:21.710075] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:08:55.749  [2024-12-16 11:30:21.710095] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:55.749  [2024-12-16 11:30:21.710106] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring
00:08:55.749  request:
00:08:55.749  {
00:08:55.749  "name": "raid_bdev1",
00:08:55.749  "raid_level": "concat",
00:08:55.749  "base_bdevs": [
00:08:55.749  "malloc1",
00:08:55.749  "malloc2"
00:08:55.749  ],
00:08:55.749  "strip_size_kb": 64,
00:08:55.749  "superblock": false,
00:08:55.749  "method": "bdev_raid_create",
00:08:55.749  "req_id": 1
00:08:55.749  }
00:08:55.749  Got JSON-RPC error response
00:08:55.749  response:
00:08:55.749  {
00:08:55.749  "code": -17,
00:08:55.749  "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:08:55.749  }
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
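The failed call above is the expected negative case: deleting the RAID and the passthru bdevs exposes malloc1 and malloc2, which still carry the superblock written earlier, so a fresh bdev_raid_create directly on them is refused with JSON-RPC error -17 ("File exists"). Roughly, and again assuming scripts/rpc.py:

  # expected to fail while the old superblock is still present on the base bdevs
  if ! ./scripts/rpc.py bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1; then
      echo "create correctly rejected (stale superblock present)"
  fi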
00:08:55.749    11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:55.749    11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.749    11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.749    11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:08:55.749    11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.749   11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.749  [2024-12-16 11:30:21.775715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:08:55.749  [2024-12-16 11:30:21.775845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:55.749  [2024-12-16 11:30:21.775898] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:08:55.749  [2024-12-16 11:30:21.775961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:55.749  [2024-12-16 11:30:21.778498] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:55.749  [2024-12-16 11:30:21.778603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:08:55.750  [2024-12-16 11:30:21.778734] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:08:55.750  [2024-12-16 11:30:21.778821] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:08:55.750  pt1
00:08:55.750   11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.750   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2
00:08:55.750   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:55.750   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:55.750   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:55.750   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:55.750   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:55.750   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:55.750   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:55.750   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:55.750   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:55.750    11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:55.750    11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:55.750    11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.750    11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.750    11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:56.008   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:56.008    "name": "raid_bdev1",
00:08:56.008    "uuid": "3acc53b8-04ed-4b54-979a-e6936a3634b9",
00:08:56.008    "strip_size_kb": 64,
00:08:56.008    "state": "configuring",
00:08:56.008    "raid_level": "concat",
00:08:56.008    "superblock": true,
00:08:56.008    "num_base_bdevs": 2,
00:08:56.008    "num_base_bdevs_discovered": 1,
00:08:56.008    "num_base_bdevs_operational": 2,
00:08:56.008    "base_bdevs_list": [
00:08:56.008      {
00:08:56.008        "name": "pt1",
00:08:56.008        "uuid": "00000000-0000-0000-0000-000000000001",
00:08:56.008        "is_configured": true,
00:08:56.008        "data_offset": 2048,
00:08:56.008        "data_size": 63488
00:08:56.008      },
00:08:56.008      {
00:08:56.008        "name": null,
00:08:56.008        "uuid": "00000000-0000-0000-0000-000000000002",
00:08:56.009        "is_configured": false,
00:08:56.009        "data_offset": 2048,
00:08:56.009        "data_size": 63488
00:08:56.009      }
00:08:56.009    ]
00:08:56.009  }'
00:08:56.009   11:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:56.009   11:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.326   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:08:56.326   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:08:56.326   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:08:56.326   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:08:56.326   11:30:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:56.326   11:30:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.326  [2024-12-16 11:30:22.247426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:08:56.326  [2024-12-16 11:30:22.247585] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:56.326  [2024-12-16 11:30:22.247639] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:08:56.326  [2024-12-16 11:30:22.247686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:56.326  [2024-12-16 11:30:22.248197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:56.326  [2024-12-16 11:30:22.248267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:08:56.326  [2024-12-16 11:30:22.248393] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:08:56.326  [2024-12-16 11:30:22.248451] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:08:56.326  [2024-12-16 11:30:22.248612] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:08:56.326  [2024-12-16 11:30:22.248661] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:08:56.326  [2024-12-16 11:30:22.248952] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:08:56.326  [2024-12-16 11:30:22.249125] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:08:56.326  [2024-12-16 11:30:22.249179] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:08:56.326  [2024-12-16 11:30:22.249343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:56.326  pt2
00:08:56.326   11:30:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:56.326   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:08:56.326   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:08:56.326   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:08:56.326   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:56.326   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:56.326   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:56.326   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:56.326   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:56.326   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:56.326   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:56.326   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:56.326   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:56.326    11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:56.326    11:30:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:56.326    11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:56.326    11:30:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.326    11:30:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:56.326   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:56.326    "name": "raid_bdev1",
00:08:56.326    "uuid": "3acc53b8-04ed-4b54-979a-e6936a3634b9",
00:08:56.326    "strip_size_kb": 64,
00:08:56.326    "state": "online",
00:08:56.326    "raid_level": "concat",
00:08:56.326    "superblock": true,
00:08:56.326    "num_base_bdevs": 2,
00:08:56.326    "num_base_bdevs_discovered": 2,
00:08:56.326    "num_base_bdevs_operational": 2,
00:08:56.326    "base_bdevs_list": [
00:08:56.326      {
00:08:56.326        "name": "pt1",
00:08:56.326        "uuid": "00000000-0000-0000-0000-000000000001",
00:08:56.326        "is_configured": true,
00:08:56.326        "data_offset": 2048,
00:08:56.326        "data_size": 63488
00:08:56.326      },
00:08:56.326      {
00:08:56.326        "name": "pt2",
00:08:56.326        "uuid": "00000000-0000-0000-0000-000000000002",
00:08:56.326        "is_configured": true,
00:08:56.326        "data_offset": 2048,
00:08:56.326        "data_size": 63488
00:08:56.327      }
00:08:56.327    ]
00:08:56.327  }'
00:08:56.327   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:56.327   11:30:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
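Recreating the passthru bdevs is enough to bring the array back: the examine path finds the superblock on each pt bdev, re-registers raid_bdev1 as "configuring" with one of two base bdevs after pt1 appears, and flips it to "online" once pt2 appears, with no explicit bdev_raid_create. A sketch of the reassembly, assuming scripts/rpc.py:

  ./scripts/rpc.py bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  ./scripts/rpc.py bdev_raid_get_bdevs all   # state: configuring, 1 of 2 base bdevs discovered

  ./scripts/rpc.py bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  ./scripts/rpc.py bdev_raid_get_bdevs all   # state: online, 2 of 2 base bdevs discovered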
00:08:56.916   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:08:56.916   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:08:56.916   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:56.916   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:56.916   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:56.916   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:56.916    11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:56.916    11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:56.916    11:30:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:56.916    11:30:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.916  [2024-12-16 11:30:22.771020] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:56.916    11:30:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:56.916   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:56.916    "name": "raid_bdev1",
00:08:56.916    "aliases": [
00:08:56.916      "3acc53b8-04ed-4b54-979a-e6936a3634b9"
00:08:56.916    ],
00:08:56.916    "product_name": "Raid Volume",
00:08:56.916    "block_size": 512,
00:08:56.916    "num_blocks": 126976,
00:08:56.916    "uuid": "3acc53b8-04ed-4b54-979a-e6936a3634b9",
00:08:56.916    "assigned_rate_limits": {
00:08:56.916      "rw_ios_per_sec": 0,
00:08:56.916      "rw_mbytes_per_sec": 0,
00:08:56.916      "r_mbytes_per_sec": 0,
00:08:56.916      "w_mbytes_per_sec": 0
00:08:56.916    },
00:08:56.916    "claimed": false,
00:08:56.916    "zoned": false,
00:08:56.916    "supported_io_types": {
00:08:56.916      "read": true,
00:08:56.916      "write": true,
00:08:56.916      "unmap": true,
00:08:56.916      "flush": true,
00:08:56.916      "reset": true,
00:08:56.916      "nvme_admin": false,
00:08:56.916      "nvme_io": false,
00:08:56.916      "nvme_io_md": false,
00:08:56.916      "write_zeroes": true,
00:08:56.916      "zcopy": false,
00:08:56.916      "get_zone_info": false,
00:08:56.916      "zone_management": false,
00:08:56.916      "zone_append": false,
00:08:56.916      "compare": false,
00:08:56.916      "compare_and_write": false,
00:08:56.916      "abort": false,
00:08:56.916      "seek_hole": false,
00:08:56.916      "seek_data": false,
00:08:56.916      "copy": false,
00:08:56.916      "nvme_iov_md": false
00:08:56.916    },
00:08:56.916    "memory_domains": [
00:08:56.916      {
00:08:56.916        "dma_device_id": "system",
00:08:56.916        "dma_device_type": 1
00:08:56.916      },
00:08:56.916      {
00:08:56.916        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:56.916        "dma_device_type": 2
00:08:56.916      },
00:08:56.916      {
00:08:56.916        "dma_device_id": "system",
00:08:56.916        "dma_device_type": 1
00:08:56.916      },
00:08:56.916      {
00:08:56.916        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:56.916        "dma_device_type": 2
00:08:56.916      }
00:08:56.916    ],
00:08:56.916    "driver_specific": {
00:08:56.916      "raid": {
00:08:56.916        "uuid": "3acc53b8-04ed-4b54-979a-e6936a3634b9",
00:08:56.916        "strip_size_kb": 64,
00:08:56.916        "state": "online",
00:08:56.916        "raid_level": "concat",
00:08:56.916        "superblock": true,
00:08:56.916        "num_base_bdevs": 2,
00:08:56.916        "num_base_bdevs_discovered": 2,
00:08:56.916        "num_base_bdevs_operational": 2,
00:08:56.916        "base_bdevs_list": [
00:08:56.916          {
00:08:56.916            "name": "pt1",
00:08:56.916            "uuid": "00000000-0000-0000-0000-000000000001",
00:08:56.916            "is_configured": true,
00:08:56.916            "data_offset": 2048,
00:08:56.916            "data_size": 63488
00:08:56.916          },
00:08:56.916          {
00:08:56.916            "name": "pt2",
00:08:56.916            "uuid": "00000000-0000-0000-0000-000000000002",
00:08:56.916            "is_configured": true,
00:08:56.916            "data_offset": 2048,
00:08:56.916            "data_size": 63488
00:08:56.916          }
00:08:56.916        ]
00:08:56.916      }
00:08:56.916    }
00:08:56.916  }'
00:08:56.916    11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:56.916   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:08:56.916  pt2'
00:08:56.916    11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:56.916   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:08:56.916   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:56.916    11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:56.916    11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:08:56.916    11:30:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:56.916    11:30:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.916    11:30:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:56.916   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:08:56.916   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:08:56.916   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:56.917    11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:56.917    11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:08:56.917    11:30:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:56.917    11:30:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.917    11:30:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:57.176   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:08:57.176   11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:08:57.176    11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:08:57.176    11:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:57.176    11:30:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:57.176    11:30:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.176  [2024-12-16 11:30:23.006987] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:57.176    11:30:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:57.176   11:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3acc53b8-04ed-4b54-979a-e6936a3634b9 '!=' 3acc53b8-04ed-4b54-979a-e6936a3634b9 ']'
00:08:57.176   11:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat
00:08:57.176   11:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:57.176   11:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:08:57.176   11:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73776
00:08:57.176   11:30:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 73776 ']'
00:08:57.176   11:30:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 73776
00:08:57.176    11:30:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname
00:08:57.176   11:30:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:57.176    11:30:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73776
00:08:57.176  killing process with pid 73776
00:08:57.176   11:30:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:57.176   11:30:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:57.176   11:30:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73776'
00:08:57.176   11:30:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 73776
00:08:57.176  [2024-12-16 11:30:23.062897] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:57.176  [2024-12-16 11:30:23.063013] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:57.176   11:30:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 73776
00:08:57.176  [2024-12-16 11:30:23.063074] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:57.176  [2024-12-16 11:30:23.063085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:08:57.176  [2024-12-16 11:30:23.087687] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:57.436  ************************************
00:08:57.436  END TEST raid_superblock_test
00:08:57.436  ************************************
00:08:57.436   11:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:08:57.436  
00:08:57.436  real	0m3.661s
00:08:57.436  user	0m5.724s
00:08:57.436  sys	0m0.766s
00:08:57.436   11:30:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:57.436   11:30:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.436   11:30:23 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read
00:08:57.436   11:30:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:08:57.436   11:30:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:57.436   11:30:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:57.436  ************************************
00:08:57.436  START TEST raid_read_error_test
00:08:57.436  ************************************
00:08:57.436   11:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read
00:08:57.436   11:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat
00:08:57.436   11:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2
00:08:57.436   11:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:08:57.436    11:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:08:57.436    11:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:57.436    11:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:08:57.436    11:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:08:57.436    11:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:57.436    11:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:08:57.436    11:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:08:57.436    11:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:57.436   11:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:08:57.436   11:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:08:57.436   11:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:08:57.436   11:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:08:57.436   11:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:08:57.436   11:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:08:57.436   11:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:08:57.436   11:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']'
00:08:57.436   11:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:08:57.436   11:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:08:57.437    11:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:08:57.437   11:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8CfVNrNOaO
00:08:57.437   11:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73977
00:08:57.437   11:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:08:57.437   11:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73977
00:08:57.437   11:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 73977 ']'
00:08:57.437   11:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:57.437   11:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:57.437   11:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:57.437  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:57.437   11:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:57.437   11:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.695  [2024-12-16 11:30:23.526517] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:08:57.695  [2024-12-16 11:30:23.526690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73977 ]
00:08:57.695  [2024-12-16 11:30:23.693495] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:57.695  [2024-12-16 11:30:23.747095] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:57.954  [2024-12-16 11:30:23.792904] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:57.954  [2024-12-16 11:30:23.793030] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:58.524   11:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:58.524   11:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0
00:08:58.524   11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:08:58.524   11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:08:58.524   11:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.524   11:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.524  BaseBdev1_malloc
00:08:58.524   11:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.524   11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:08:58.524   11:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.524   11:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.524  true
00:08:58.524   11:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.524   11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:08:58.524   11:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.524   11:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.524  [2024-12-16 11:30:24.461694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:08:58.524  [2024-12-16 11:30:24.461760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:58.524  [2024-12-16 11:30:24.461787] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:08:58.524  [2024-12-16 11:30:24.461798] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:58.524  [2024-12-16 11:30:24.464334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:58.524  [2024-12-16 11:30:24.464456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:08:58.524  BaseBdev1
00:08:58.524   11:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.524   11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:08:58.524   11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:08:58.524   11:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.524   11:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.524  BaseBdev2_malloc
00:08:58.524   11:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.524   11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:08:58.524   11:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.524   11:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.524  true
00:08:58.524   11:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.524   11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:08:58.524   11:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.524   11:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.524  [2024-12-16 11:30:24.514138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:08:58.524  [2024-12-16 11:30:24.514261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:58.524  [2024-12-16 11:30:24.514304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:08:58.524  [2024-12-16 11:30:24.514371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:58.524  [2024-12-16 11:30:24.516929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:58.524  [2024-12-16 11:30:24.517016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:08:58.524  BaseBdev2
00:08:58.524   11:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
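Each base device in this error-injection test is a three-layer stack, so I/O failures can be injected underneath the RAID later on: a malloc bdev, an error bdev wrapped around it (which takes the EE_ name prefix), and a passthru bdev on top that the RAID actually consumes. Per base bdev, again assuming scripts/rpc.py:

  ./scripts/rpc.py bdev_malloc_create 32 512 -b BaseBdev1_malloc
  ./scripts/rpc.py bdev_error_create BaseBdev1_malloc            # registers EE_BaseBdev1_malloc
  ./scripts/rpc.py bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1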
00:08:58.524   11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:08:58.524   11:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.524   11:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.524  [2024-12-16 11:30:24.526139] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:58.524  [2024-12-16 11:30:24.528290] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:58.524  [2024-12-16 11:30:24.528531] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:08:58.524  [2024-12-16 11:30:24.528601] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:08:58.524  [2024-12-16 11:30:24.528918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:08:58.524  [2024-12-16 11:30:24.529105] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:08:58.524  [2024-12-16 11:30:24.529155] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:08:58.524  [2024-12-16 11:30:24.529352] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:58.524   11:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.525   11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:08:58.525   11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:58.525   11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:58.525   11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:58.525   11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:58.525   11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:58.525   11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:58.525   11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:58.525   11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:58.525   11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:58.525    11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:58.525    11:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.525    11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:58.525    11:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.525    11:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.525   11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:58.525    "name": "raid_bdev1",
00:08:58.525    "uuid": "d8977725-2830-4ae7-8722-4203def0f234",
00:08:58.525    "strip_size_kb": 64,
00:08:58.525    "state": "online",
00:08:58.525    "raid_level": "concat",
00:08:58.525    "superblock": true,
00:08:58.525    "num_base_bdevs": 2,
00:08:58.525    "num_base_bdevs_discovered": 2,
00:08:58.525    "num_base_bdevs_operational": 2,
00:08:58.525    "base_bdevs_list": [
00:08:58.525      {
00:08:58.525        "name": "BaseBdev1",
00:08:58.525        "uuid": "d9e27d2f-3d1e-5307-99eb-a3f546391337",
00:08:58.525        "is_configured": true,
00:08:58.525        "data_offset": 2048,
00:08:58.525        "data_size": 63488
00:08:58.525      },
00:08:58.525      {
00:08:58.525        "name": "BaseBdev2",
00:08:58.525        "uuid": "c5fab754-8253-5150-a102-dd7deffebdbf",
00:08:58.525        "is_configured": true,
00:08:58.525        "data_offset": 2048,
00:08:58.525        "data_size": 63488
00:08:58.525      }
00:08:58.525    ]
00:08:58.525  }'
00:08:58.525   11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:58.525   11:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
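Annotation: verify_raid_bdev_state captures the JSON object above through bdev_raid_get_bdevs all piped into the jq select filter, then compares the expected state, level, strip size, and member counts against it. The snippet below is an illustrative restatement of that check, not the exact comparison logic inside bdev_raid.sh:
  # Illustrative only: pull the fields the "online concat 64 2" check cares about.
  ./scripts/rpc.py bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1")
             | "\(.state) \(.raid_level) \(.strip_size_kb) \(.num_base_bdevs_discovered)"'
  # expected here: "online concat 64 2"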
00:08:59.095   11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:08:59.095   11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:08:59.095  [2024-12-16 11:30:25.089640] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:09:00.033   11:30:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:09:00.033   11:30:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:00.033   11:30:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:00.033   11:30:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:00.033   11:30:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:09:00.033   11:30:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]]
00:09:00.033   11:30:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:09:00.033   11:30:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:09:00.033   11:30:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:00.033   11:30:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:00.033   11:30:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:00.033   11:30:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:00.033   11:30:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:00.033   11:30:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:00.033   11:30:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:00.033   11:30:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:00.033   11:30:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:00.033    11:30:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:00.033    11:30:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:00.033    11:30:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:00.033    11:30:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:00.033    11:30:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:00.033   11:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:00.033    "name": "raid_bdev1",
00:09:00.033    "uuid": "d8977725-2830-4ae7-8722-4203def0f234",
00:09:00.033    "strip_size_kb": 64,
00:09:00.033    "state": "online",
00:09:00.033    "raid_level": "concat",
00:09:00.033    "superblock": true,
00:09:00.033    "num_base_bdevs": 2,
00:09:00.033    "num_base_bdevs_discovered": 2,
00:09:00.033    "num_base_bdevs_operational": 2,
00:09:00.033    "base_bdevs_list": [
00:09:00.033      {
00:09:00.033        "name": "BaseBdev1",
00:09:00.033        "uuid": "d9e27d2f-3d1e-5307-99eb-a3f546391337",
00:09:00.033        "is_configured": true,
00:09:00.033        "data_offset": 2048,
00:09:00.033        "data_size": 63488
00:09:00.033      },
00:09:00.033      {
00:09:00.033        "name": "BaseBdev2",
00:09:00.033        "uuid": "c5fab754-8253-5150-a102-dd7deffebdbf",
00:09:00.033        "is_configured": true,
00:09:00.033        "data_offset": 2048,
00:09:00.033        "data_size": 63488
00:09:00.033      }
00:09:00.033    ]
00:09:00.033  }'
00:09:00.033   11:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:00.033   11:30:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:00.603   11:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:00.603   11:30:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:00.603   11:30:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:00.603  [2024-12-16 11:30:26.483150] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:00.603  [2024-12-16 11:30:26.483249] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:00.603  [2024-12-16 11:30:26.486351] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:00.603  [2024-12-16 11:30:26.486448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:00.603  [2024-12-16 11:30:26.486515] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:00.603  [2024-12-16 11:30:26.486606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:09:00.603  {
00:09:00.603    "results": [
00:09:00.603      {
00:09:00.603        "job": "raid_bdev1",
00:09:00.603        "core_mask": "0x1",
00:09:00.603        "workload": "randrw",
00:09:00.603        "percentage": 50,
00:09:00.603        "status": "finished",
00:09:00.603        "queue_depth": 1,
00:09:00.603        "io_size": 131072,
00:09:00.603        "runtime": 1.39412,
00:09:00.603        "iops": 13871.115829340373,
00:09:00.603        "mibps": 1733.8894786675467,
00:09:00.603        "io_failed": 1,
00:09:00.604        "io_timeout": 0,
00:09:00.604        "avg_latency_us": 99.45677786205263,
00:09:00.604        "min_latency_us": 29.512663755458515,
00:09:00.604        "max_latency_us": 1752.8733624454148
00:09:00.604      }
00:09:00.604    ],
00:09:00.604    "core_count": 1
00:09:00.604  }
00:09:00.604   11:30:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:00.604   11:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73977
00:09:00.604   11:30:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 73977 ']'
00:09:00.604   11:30:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 73977
00:09:00.604    11:30:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname
00:09:00.604   11:30:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:00.604    11:30:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73977
00:09:00.604   11:30:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:00.604   11:30:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:00.604   11:30:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73977'
00:09:00.604  killing process with pid 73977
00:09:00.604   11:30:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 73977
00:09:00.604  [2024-12-16 11:30:26.540940] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:00.604   11:30:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 73977
00:09:00.604  [2024-12-16 11:30:26.557550] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
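Annotation: killprocess tears the bdevperf app down: kill -0 confirms pid 73977 is still alive, ps shows its process name is reactor_0 (the SPDK reactor thread), the process is then killed and reaped with wait, and the raid_bdev_fini_start/raid_bdev_exit debug lines above are the raid module shutting down as the app exits. A minimal sketch of the same pattern, where $pid is a hypothetical variable standing in for the bdevperf pid:
  # Sketch of the killprocess pattern ($pid is hypothetical).
  if kill -0 "$pid" 2>/dev/null; then
      kill "$pid"           # ask the app to shut down
      wait "$pid" || true   # reap it; a non-zero exit after a signal is expected
  fi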
00:09:00.919    11:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8CfVNrNOaO
00:09:00.919    11:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:09:00.919    11:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:09:00.919   11:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72
00:09:00.919   11:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
00:09:00.919   11:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:00.919   11:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:09:00.919   11:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]]
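Annotation: fail_per_s is scraped from the bdevperf log file: lines containing "Job" are dropped, the raid_bdev1 summary line is selected, and its sixth column is taken as the failures-per-second figure. That figure is consistent with the results JSON above, assuming it is simply io_failed divided by runtime:
  # Cross-check against the results block above: 1 failed I/O over 1.39412 s.
  awk 'BEGIN { printf "%.2f\n", 1 / 1.39412 }'   # prints 0.72, matching fail_per_s
Because concat carries no redundancy, has_redundancy returns 1 and the test only asserts that the scraped value is non-zero, which is the [[ 0.72 != 0.00 ]] check just above.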
00:09:00.919  
00:09:00.919  real	0m3.407s
00:09:00.919  user	0m4.391s
00:09:00.919  sys	0m0.563s
00:09:00.919   11:30:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:00.919  ************************************
00:09:00.919  END TEST raid_read_error_test
00:09:00.919  ************************************
00:09:00.919   11:30:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:00.919   11:30:26 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write
00:09:00.919   11:30:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:09:00.919   11:30:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:00.920   11:30:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:00.920  ************************************
00:09:00.920  START TEST raid_write_error_test
00:09:00.920  ************************************
00:09:00.920   11:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write
00:09:00.920   11:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat
00:09:00.920   11:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2
00:09:00.920   11:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:09:00.920    11:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:09:00.920    11:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:00.920    11:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:09:00.920    11:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:00.920    11:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:00.920    11:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:09:00.920    11:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:00.920    11:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:00.920   11:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:09:00.920   11:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:09:00.920   11:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:09:00.920   11:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:09:00.920   11:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:09:00.920   11:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:09:00.920   11:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:09:00.920   11:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']'
00:09:00.920   11:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:09:00.920   11:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:09:00.920    11:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:09:00.920   11:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.RfyaAVyWVF
00:09:00.920   11:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74110
00:09:00.920   11:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:09:00.920   11:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74110
00:09:00.920   11:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 74110 ']'
00:09:00.920  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:00.920   11:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:00.920   11:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:00.920   11:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:00.920   11:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:00.920   11:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
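Annotation: bdevperf is launched here with -z, so it brings up the bdev stack and then idles until bdevperf.py perform_tests is called further down, while waitforlisten polls /var/tmp/spdk.sock until the app's RPC server answers. Several flags can be read directly off the results JSON printed later in this test; that observed mapping, not bdevperf's help text, is the basis for the annotated reproduction below:
  # Annotation of the observed command line; flag meanings are inferred from this log.
  #   -w randrw -M 50  -> "workload": "randrw", "percentage": 50
  #   -q 1             -> "queue_depth": 1
  #   -o 128k          -> "io_size": 131072
  #   -z               -> wait for bdevperf.py perform_tests before starting I/O
  #   -L bdev_raid     -> the *DEBUG* bdev_raid messages interleaved in this log
  build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid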
00:09:01.206  [2024-12-16 11:30:26.997876] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:09:01.206  [2024-12-16 11:30:26.998039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74110 ]
00:09:01.206  [2024-12-16 11:30:27.163453] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:01.206  [2024-12-16 11:30:27.215426] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:09:01.206  [2024-12-16 11:30:27.260854] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:01.206  [2024-12-16 11:30:27.260886] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:02.146   11:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:02.146   11:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0
00:09:02.146   11:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:02.146   11:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:09:02.146   11:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:02.146   11:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:02.146  BaseBdev1_malloc
00:09:02.146   11:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:02.146   11:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:09:02.146   11:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:02.146   11:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:02.146  true
00:09:02.146   11:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:02.146   11:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:09:02.146   11:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:02.146   11:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:02.146  [2024-12-16 11:30:27.972851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:09:02.146  [2024-12-16 11:30:27.972918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:02.146  [2024-12-16 11:30:27.972960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:09:02.146  [2024-12-16 11:30:27.972971] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:02.146  [2024-12-16 11:30:27.975515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:02.146  [2024-12-16 11:30:27.975570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:09:02.146  BaseBdev1
00:09:02.146   11:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:02.146   11:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:02.146   11:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:09:02.146   11:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:02.146   11:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:02.146  BaseBdev2_malloc
00:09:02.146   11:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:02.146   11:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:09:02.146   11:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:02.146   11:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:02.146  true
00:09:02.146   11:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:02.146   11:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:09:02.146   11:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:02.146   11:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:02.146  [2024-12-16 11:30:28.026102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:09:02.146  [2024-12-16 11:30:28.026160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:02.146  [2024-12-16 11:30:28.026183] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:09:02.146  [2024-12-16 11:30:28.026194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:02.146  [2024-12-16 11:30:28.028660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:02.146  [2024-12-16 11:30:28.028760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:09:02.146  BaseBdev2
00:09:02.146   11:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:02.146   11:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:09:02.146   11:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:02.146   11:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:02.146  [2024-12-16 11:30:28.038116] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:02.146  [2024-12-16 11:30:28.040305] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:02.146  [2024-12-16 11:30:28.040585] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:09:02.146  [2024-12-16 11:30:28.040605] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:09:02.146  [2024-12-16 11:30:28.040926] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:09:02.146  [2024-12-16 11:30:28.041094] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:09:02.146  [2024-12-16 11:30:28.041110] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:09:02.146  [2024-12-16 11:30:28.041261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:02.146   11:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:02.146   11:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:09:02.146   11:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:02.146   11:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:02.146   11:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:02.146   11:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:02.146   11:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:02.146   11:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:02.146   11:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:02.146   11:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:02.146   11:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:02.146    11:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:02.146    11:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:02.146    11:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:02.146    11:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:02.146    11:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:02.147   11:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:02.147    "name": "raid_bdev1",
00:09:02.147    "uuid": "acd91f6e-d48d-4f00-ad51-1e4a23608105",
00:09:02.147    "strip_size_kb": 64,
00:09:02.147    "state": "online",
00:09:02.147    "raid_level": "concat",
00:09:02.147    "superblock": true,
00:09:02.147    "num_base_bdevs": 2,
00:09:02.147    "num_base_bdevs_discovered": 2,
00:09:02.147    "num_base_bdevs_operational": 2,
00:09:02.147    "base_bdevs_list": [
00:09:02.147      {
00:09:02.147        "name": "BaseBdev1",
00:09:02.147        "uuid": "e5e115ec-5d98-51f3-bdbb-b939d44313fc",
00:09:02.147        "is_configured": true,
00:09:02.147        "data_offset": 2048,
00:09:02.147        "data_size": 63488
00:09:02.147      },
00:09:02.147      {
00:09:02.147        "name": "BaseBdev2",
00:09:02.147        "uuid": "30b3a787-8760-51b2-8f1f-782898128206",
00:09:02.147        "is_configured": true,
00:09:02.147        "data_offset": 2048,
00:09:02.147        "data_size": 63488
00:09:02.147      }
00:09:02.147    ]
00:09:02.147  }'
00:09:02.147   11:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:02.147   11:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:02.716   11:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:09:02.716   11:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:09:02.716  [2024-12-16 11:30:28.641492] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:09:03.694   11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:09:03.694   11:30:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.694   11:30:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:03.694   11:30:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
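Annotation: this is where the write-path fault is armed. bdev_error_inject_error tells the error bdev sitting under BaseBdev1 to fail write I/O (the read test above used the same RPC with "read"). Because concat has no redundancy the failure is not masked; it shows up as io_failed: 1 in the bdevperf results below while the raid itself stays online with both members configured. Sketch of the same call issued directly, with the rpc.py path assumed as before:
  # Sketch only: arm a write failure on the error bdev under BaseBdev1.
  ./scripts/rpc.py bdev_error_inject_error EE_BaseBdev1_malloc write failure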
00:09:03.694   11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:09:03.694   11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]]
00:09:03.694   11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:09:03.694   11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:09:03.694   11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:03.694   11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:03.694   11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:03.694   11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:03.694   11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:03.694   11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:03.694   11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:03.694   11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:03.694   11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:03.694    11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:03.694    11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:03.694    11:30:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.694    11:30:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:03.694    11:30:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.694   11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:03.694    "name": "raid_bdev1",
00:09:03.694    "uuid": "acd91f6e-d48d-4f00-ad51-1e4a23608105",
00:09:03.694    "strip_size_kb": 64,
00:09:03.694    "state": "online",
00:09:03.694    "raid_level": "concat",
00:09:03.694    "superblock": true,
00:09:03.694    "num_base_bdevs": 2,
00:09:03.694    "num_base_bdevs_discovered": 2,
00:09:03.694    "num_base_bdevs_operational": 2,
00:09:03.694    "base_bdevs_list": [
00:09:03.694      {
00:09:03.694        "name": "BaseBdev1",
00:09:03.694        "uuid": "e5e115ec-5d98-51f3-bdbb-b939d44313fc",
00:09:03.694        "is_configured": true,
00:09:03.694        "data_offset": 2048,
00:09:03.694        "data_size": 63488
00:09:03.694      },
00:09:03.694      {
00:09:03.694        "name": "BaseBdev2",
00:09:03.694        "uuid": "30b3a787-8760-51b2-8f1f-782898128206",
00:09:03.694        "is_configured": true,
00:09:03.694        "data_offset": 2048,
00:09:03.694        "data_size": 63488
00:09:03.694      }
00:09:03.694    ]
00:09:03.694  }'
00:09:03.694   11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:03.694   11:30:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:04.264   11:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:04.264   11:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:04.264   11:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:04.264  [2024-12-16 11:30:30.067266] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:04.264  [2024-12-16 11:30:30.067373] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:04.264  [2024-12-16 11:30:30.070486] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:04.264  [2024-12-16 11:30:30.070605] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:04.264  [2024-12-16 11:30:30.070671] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:04.264  [2024-12-16 11:30:30.070743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:09:04.264  {
00:09:04.264    "results": [
00:09:04.264      {
00:09:04.264        "job": "raid_bdev1",
00:09:04.264        "core_mask": "0x1",
00:09:04.264        "workload": "randrw",
00:09:04.264        "percentage": 50,
00:09:04.264        "status": "finished",
00:09:04.264        "queue_depth": 1,
00:09:04.264        "io_size": 131072,
00:09:04.264        "runtime": 1.426402,
00:09:04.264        "iops": 13874.770226065302,
00:09:04.264        "mibps": 1734.3462782581628,
00:09:04.264        "io_failed": 1,
00:09:04.264        "io_timeout": 0,
00:09:04.264        "avg_latency_us": 99.46872169250157,
00:09:04.264        "min_latency_us": 28.17117903930131,
00:09:04.264        "max_latency_us": 1760.0279475982534
00:09:04.264      }
00:09:04.264    ],
00:09:04.264    "core_count": 1
00:09:04.264  }
00:09:04.264   11:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:04.264   11:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74110
00:09:04.264   11:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 74110 ']'
00:09:04.264   11:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 74110
00:09:04.264    11:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname
00:09:04.264   11:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:04.264    11:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74110
00:09:04.264   11:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:04.264   11:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:04.264   11:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74110'
00:09:04.264  killing process with pid 74110
00:09:04.264   11:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 74110
00:09:04.264   11:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 74110
00:09:04.264  [2024-12-16 11:30:30.120064] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:04.264  [2024-12-16 11:30:30.136363] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:04.523    11:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.RfyaAVyWVF
00:09:04.523    11:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:09:04.523    11:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:09:04.523  ************************************
00:09:04.523  END TEST raid_write_error_test
00:09:04.523  ************************************
00:09:04.523   11:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70
00:09:04.523   11:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
00:09:04.523   11:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:04.523   11:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:09:04.523   11:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]]
00:09:04.523  
00:09:04.523  real	0m3.498s
00:09:04.523  user	0m4.580s
00:09:04.523  sys	0m0.567s
00:09:04.523   11:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:04.523   11:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:04.523   11:30:30 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:09:04.523   11:30:30 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false
00:09:04.523   11:30:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:09:04.523   11:30:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:04.523   11:30:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:04.523  ************************************
00:09:04.523  START TEST raid_state_function_test
00:09:04.523  ************************************
00:09:04.523   11:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false
00:09:04.523   11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:09:04.523   11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:09:04.523   11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:09:04.523   11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:09:04.523    11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:09:04.523    11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:04.523    11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:09:04.523    11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:04.523    11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:04.523    11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:09:04.523    11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:04.523    11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:04.523   11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:09:04.523   11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:09:04.523   11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:09:04.523   11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:09:04.523   11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:09:04.523   11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:09:04.523   11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:09:04.523   11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:09:04.523   11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:09:04.523   11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:09:04.523   11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74244
00:09:04.523  Process raid pid: 74244
00:09:04.523   11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74244'
00:09:04.523   11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:09:04.523   11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74244
00:09:04.523  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:04.523   11:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 74244 ']'
00:09:04.523   11:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:04.523   11:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:04.523   11:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:04.523   11:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:04.523   11:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:04.523  [2024-12-16 11:30:30.549881] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:09:04.523  [2024-12-16 11:30:30.550123] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:04.783  [2024-12-16 11:30:30.716175] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:04.783  [2024-12-16 11:30:30.768013] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:09:04.783  [2024-12-16 11:30:30.813873] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:04.783  [2024-12-16 11:30:30.814002] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:05.722   11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:05.722   11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:09:05.722   11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:09:05.722   11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.722   11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.722  [2024-12-16 11:30:31.448806] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:05.722  [2024-12-16 11:30:31.448908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:05.722  [2024-12-16 11:30:31.448932] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:05.722  [2024-12-16 11:30:31.448949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
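Annotation: raid_state_function_test starts from the opposite direction. The raid1 array Existed_Raid is declared before either member exists, so the create call only records that BaseBdev1 and BaseBdev2 "don't exist now" and leaves the array in the configuring state with zero discovered members, which is exactly what the JSON dumped below reports. Sketch of the same sequence by hand, commands as in the trace and the rpc.py path assumed:
  # Sketch: declare the array first, then inspect its state before any member exists.
  ./scripts/rpc.py bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  ./scripts/rpc.py bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)"'
  # expected at this point: "configuring 0"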
00:09:05.722   11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.722   11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:09:05.722   11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:05.722   11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:05.722   11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:05.722   11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:05.722   11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:05.722   11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:05.722   11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:05.722   11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:05.722   11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:05.722    11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:05.722    11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.722    11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:05.722    11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.722    11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.722   11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:05.722    "name": "Existed_Raid",
00:09:05.722    "uuid": "00000000-0000-0000-0000-000000000000",
00:09:05.722    "strip_size_kb": 0,
00:09:05.722    "state": "configuring",
00:09:05.722    "raid_level": "raid1",
00:09:05.722    "superblock": false,
00:09:05.722    "num_base_bdevs": 2,
00:09:05.722    "num_base_bdevs_discovered": 0,
00:09:05.722    "num_base_bdevs_operational": 2,
00:09:05.722    "base_bdevs_list": [
00:09:05.722      {
00:09:05.722        "name": "BaseBdev1",
00:09:05.722        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:05.722        "is_configured": false,
00:09:05.722        "data_offset": 0,
00:09:05.722        "data_size": 0
00:09:05.722      },
00:09:05.722      {
00:09:05.722        "name": "BaseBdev2",
00:09:05.722        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:05.722        "is_configured": false,
00:09:05.722        "data_offset": 0,
00:09:05.722        "data_size": 0
00:09:05.722      }
00:09:05.722    ]
00:09:05.722  }'
00:09:05.722   11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:05.722   11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.982   11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:05.982   11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.982   11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.982  [2024-12-16 11:30:31.939932] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:05.982  [2024-12-16 11:30:31.940043] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:09:05.982   11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.982   11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:09:05.982   11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.982   11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.982  [2024-12-16 11:30:31.951928] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:05.982  [2024-12-16 11:30:31.952023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:05.982  [2024-12-16 11:30:31.952038] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:05.982  [2024-12-16 11:30:31.952052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:05.982   11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.982   11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:05.982   11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.982   11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.982  [2024-12-16 11:30:31.973685] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:05.982  BaseBdev1
00:09:05.982   11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.982   11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:09:05.982   11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:09:05.982   11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:05.982   11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:05.982   11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:05.982   11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:05.982   11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:05.982   11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.982   11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.982   11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.982   11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:05.982   11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.982   11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.982  [
00:09:05.982  {
00:09:05.982  "name": "BaseBdev1",
00:09:05.982  "aliases": [
00:09:05.982  "1d0efc9d-ccf6-40cd-8f96-8845f1c69b8e"
00:09:05.982  ],
00:09:05.982  "product_name": "Malloc disk",
00:09:05.982  "block_size": 512,
00:09:05.982  "num_blocks": 65536,
00:09:05.982  "uuid": "1d0efc9d-ccf6-40cd-8f96-8845f1c69b8e",
00:09:05.982  "assigned_rate_limits": {
00:09:05.982  "rw_ios_per_sec": 0,
00:09:05.982  "rw_mbytes_per_sec": 0,
00:09:05.982  "r_mbytes_per_sec": 0,
00:09:05.982  "w_mbytes_per_sec": 0
00:09:05.982  },
00:09:05.982  "claimed": true,
00:09:05.982  "claim_type": "exclusive_write",
00:09:05.982  "zoned": false,
00:09:05.982  "supported_io_types": {
00:09:05.982  "read": true,
00:09:05.982  "write": true,
00:09:05.982  "unmap": true,
00:09:05.982  "flush": true,
00:09:05.982  "reset": true,
00:09:05.982  "nvme_admin": false,
00:09:05.982  "nvme_io": false,
00:09:05.982  "nvme_io_md": false,
00:09:05.982  "write_zeroes": true,
00:09:05.982  "zcopy": true,
00:09:05.982  "get_zone_info": false,
00:09:05.982  "zone_management": false,
00:09:05.982  "zone_append": false,
00:09:05.982  "compare": false,
00:09:05.982  "compare_and_write": false,
00:09:05.982  "abort": true,
00:09:05.982  "seek_hole": false,
00:09:05.982  "seek_data": false,
00:09:05.982  "copy": true,
00:09:05.982  "nvme_iov_md": false
00:09:05.982  },
00:09:05.982  "memory_domains": [
00:09:05.982  {
00:09:05.982  "dma_device_id": "system",
00:09:05.982  "dma_device_type": 1
00:09:05.982  },
00:09:05.982  {
00:09:05.982  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:05.982  "dma_device_type": 2
00:09:05.982  }
00:09:05.982  ],
00:09:05.982  "driver_specific": {}
00:09:05.982  }
00:09:05.982  ]
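Annotation: waitforbdev drives the two RPCs just above: bdev_wait_for_examine lets pending examine callbacks finish, then bdev_get_bdevs -b BaseBdev1 -t 2000 polls for the bdev by name, 2000 being the default bdev_timeout the helper falls back to when no explicit timeout is passed. The dump confirms BaseBdev1 is a 65536-block, 512 B block Malloc disk already claimed exclusive_write by the configuring raid. A minimal sketch keeping the same defaults, rpc.py path assumed:
  # Sketch of waitforbdev's core: settle examine, then wait for the bdev to appear.
  ./scripts/rpc.py bdev_wait_for_examine
  ./scripts/rpc.py bdev_get_bdevs -b BaseBdev1 -t 2000   # default timeout used by the helper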
00:09:05.982   11:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.982   11:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:05.982   11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:09:05.982   11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:05.982   11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:05.982   11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:05.982   11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:05.982   11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:05.982   11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:05.982   11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:05.982   11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:05.982   11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:05.982    11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:05.982    11:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.982    11:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.982    11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:05.982    11:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.242   11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:06.242    "name": "Existed_Raid",
00:09:06.242    "uuid": "00000000-0000-0000-0000-000000000000",
00:09:06.242    "strip_size_kb": 0,
00:09:06.242    "state": "configuring",
00:09:06.242    "raid_level": "raid1",
00:09:06.242    "superblock": false,
00:09:06.242    "num_base_bdevs": 2,
00:09:06.242    "num_base_bdevs_discovered": 1,
00:09:06.242    "num_base_bdevs_operational": 2,
00:09:06.242    "base_bdevs_list": [
00:09:06.242      {
00:09:06.242        "name": "BaseBdev1",
00:09:06.242        "uuid": "1d0efc9d-ccf6-40cd-8f96-8845f1c69b8e",
00:09:06.242        "is_configured": true,
00:09:06.242        "data_offset": 0,
00:09:06.242        "data_size": 65536
00:09:06.242      },
00:09:06.242      {
00:09:06.242        "name": "BaseBdev2",
00:09:06.242        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:06.242        "is_configured": false,
00:09:06.242        "data_offset": 0,
00:09:06.242        "data_size": 0
00:09:06.242      }
00:09:06.242    ]
00:09:06.242  }'
00:09:06.242   11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:06.242   11:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.501   11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:06.501   11:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.501   11:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.501  [2024-12-16 11:30:32.484865] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:06.501  [2024-12-16 11:30:32.484952] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:09:06.501   11:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.501   11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:09:06.501   11:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.501   11:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.501  [2024-12-16 11:30:32.496879] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:06.501  [2024-12-16 11:30:32.499120] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:06.501  [2024-12-16 11:30:32.499226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:06.501   11:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.501   11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:09:06.501   11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:06.501   11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:09:06.501   11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:06.501   11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:06.501   11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:06.501   11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:06.501   11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:06.501   11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:06.501   11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:06.501   11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:06.501   11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:06.501    11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:06.501    11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:06.501    11:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.501    11:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.501    11:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.501   11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:06.501    "name": "Existed_Raid",
00:09:06.501    "uuid": "00000000-0000-0000-0000-000000000000",
00:09:06.501    "strip_size_kb": 0,
00:09:06.501    "state": "configuring",
00:09:06.501    "raid_level": "raid1",
00:09:06.501    "superblock": false,
00:09:06.501    "num_base_bdevs": 2,
00:09:06.501    "num_base_bdevs_discovered": 1,
00:09:06.501    "num_base_bdevs_operational": 2,
00:09:06.501    "base_bdevs_list": [
00:09:06.501      {
00:09:06.501        "name": "BaseBdev1",
00:09:06.501        "uuid": "1d0efc9d-ccf6-40cd-8f96-8845f1c69b8e",
00:09:06.501        "is_configured": true,
00:09:06.501        "data_offset": 0,
00:09:06.501        "data_size": 65536
00:09:06.501      },
00:09:06.501      {
00:09:06.501        "name": "BaseBdev2",
00:09:06.501        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:06.501        "is_configured": false,
00:09:06.501        "data_offset": 0,
00:09:06.501        "data_size": 0
00:09:06.501      }
00:09:06.501    ]
00:09:06.501  }'
00:09:06.501   11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:06.501   11:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.070   11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:07.070   11:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.070   11:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.070  [2024-12-16 11:30:32.996519] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:07.070  [2024-12-16 11:30:32.996715] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:09:07.070  [2024-12-16 11:30:32.996771] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:09:07.070  [2024-12-16 11:30:32.997226] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:09:07.070  [2024-12-16 11:30:32.997473] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:09:07.070  [2024-12-16 11:30:32.997579] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:09:07.070  [2024-12-16 11:30:32.997940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:07.070  BaseBdev2
00:09:07.070   11:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.070   11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:09:07.070   11:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:09:07.070   11:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:07.070   11:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:07.070   11:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:07.070   11:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:07.070   11:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:07.070   11:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.070   11:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.070   11:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.070   11:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:07.070   11:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.070   11:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.070  [
00:09:07.070  {
00:09:07.070  "name": "BaseBdev2",
00:09:07.070  "aliases": [
00:09:07.070  "8d429877-bea6-49aa-9b96-ab0fcdf302c3"
00:09:07.070  ],
00:09:07.070  "product_name": "Malloc disk",
00:09:07.070  "block_size": 512,
00:09:07.070  "num_blocks": 65536,
00:09:07.070  "uuid": "8d429877-bea6-49aa-9b96-ab0fcdf302c3",
00:09:07.070  "assigned_rate_limits": {
00:09:07.070  "rw_ios_per_sec": 0,
00:09:07.070  "rw_mbytes_per_sec": 0,
00:09:07.070  "r_mbytes_per_sec": 0,
00:09:07.070  "w_mbytes_per_sec": 0
00:09:07.070  },
00:09:07.070  "claimed": true,
00:09:07.070  "claim_type": "exclusive_write",
00:09:07.070  "zoned": false,
00:09:07.070  "supported_io_types": {
00:09:07.070  "read": true,
00:09:07.070  "write": true,
00:09:07.070  "unmap": true,
00:09:07.070  "flush": true,
00:09:07.070  "reset": true,
00:09:07.070  "nvme_admin": false,
00:09:07.070  "nvme_io": false,
00:09:07.070  "nvme_io_md": false,
00:09:07.070  "write_zeroes": true,
00:09:07.070  "zcopy": true,
00:09:07.070  "get_zone_info": false,
00:09:07.070  "zone_management": false,
00:09:07.070  "zone_append": false,
00:09:07.070  "compare": false,
00:09:07.070  "compare_and_write": false,
00:09:07.070  "abort": true,
00:09:07.070  "seek_hole": false,
00:09:07.070  "seek_data": false,
00:09:07.070  "copy": true,
00:09:07.070  "nvme_iov_md": false
00:09:07.070  },
00:09:07.070  "memory_domains": [
00:09:07.070  {
00:09:07.070  "dma_device_id": "system",
00:09:07.070  "dma_device_type": 1
00:09:07.070  },
00:09:07.070  {
00:09:07.070  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:07.070  "dma_device_type": 2
00:09:07.070  }
00:09:07.070  ],
00:09:07.070  "driver_specific": {}
00:09:07.070  }
00:09:07.070  ]
00:09:07.070   11:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.070   11:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:07.070   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:07.070   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:07.070   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:09:07.070   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:07.070   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:07.070   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:07.070   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:07.071   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:07.071   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:07.071   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:07.071   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:07.071   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:07.071    11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:07.071    11:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.071    11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:07.071    11:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.071    11:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.071   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:07.071    "name": "Existed_Raid",
00:09:07.071    "uuid": "7d7a7e2a-2193-4ec0-b9a8-55b6a9c3b131",
00:09:07.071    "strip_size_kb": 0,
00:09:07.071    "state": "online",
00:09:07.071    "raid_level": "raid1",
00:09:07.071    "superblock": false,
00:09:07.071    "num_base_bdevs": 2,
00:09:07.071    "num_base_bdevs_discovered": 2,
00:09:07.071    "num_base_bdevs_operational": 2,
00:09:07.071    "base_bdevs_list": [
00:09:07.071      {
00:09:07.071        "name": "BaseBdev1",
00:09:07.071        "uuid": "1d0efc9d-ccf6-40cd-8f96-8845f1c69b8e",
00:09:07.071        "is_configured": true,
00:09:07.071        "data_offset": 0,
00:09:07.071        "data_size": 65536
00:09:07.071      },
00:09:07.071      {
00:09:07.071        "name": "BaseBdev2",
00:09:07.071        "uuid": "8d429877-bea6-49aa-9b96-ab0fcdf302c3",
00:09:07.071        "is_configured": true,
00:09:07.071        "data_offset": 0,
00:09:07.071        "data_size": 65536
00:09:07.071      }
00:09:07.071    ]
00:09:07.071  }'
00:09:07.071   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:07.071   11:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.641   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:09:07.641   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:09:07.641   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:07.641   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:07.641   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:07.641   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:07.641    11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:09:07.641    11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:07.641    11:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.641    11:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.641  [2024-12-16 11:30:33.500076] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:07.641    11:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.641   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:07.641    "name": "Existed_Raid",
00:09:07.641    "aliases": [
00:09:07.641      "7d7a7e2a-2193-4ec0-b9a8-55b6a9c3b131"
00:09:07.641    ],
00:09:07.641    "product_name": "Raid Volume",
00:09:07.641    "block_size": 512,
00:09:07.641    "num_blocks": 65536,
00:09:07.641    "uuid": "7d7a7e2a-2193-4ec0-b9a8-55b6a9c3b131",
00:09:07.641    "assigned_rate_limits": {
00:09:07.641      "rw_ios_per_sec": 0,
00:09:07.641      "rw_mbytes_per_sec": 0,
00:09:07.641      "r_mbytes_per_sec": 0,
00:09:07.641      "w_mbytes_per_sec": 0
00:09:07.641    },
00:09:07.641    "claimed": false,
00:09:07.641    "zoned": false,
00:09:07.641    "supported_io_types": {
00:09:07.641      "read": true,
00:09:07.641      "write": true,
00:09:07.641      "unmap": false,
00:09:07.641      "flush": false,
00:09:07.641      "reset": true,
00:09:07.641      "nvme_admin": false,
00:09:07.641      "nvme_io": false,
00:09:07.641      "nvme_io_md": false,
00:09:07.641      "write_zeroes": true,
00:09:07.641      "zcopy": false,
00:09:07.641      "get_zone_info": false,
00:09:07.641      "zone_management": false,
00:09:07.641      "zone_append": false,
00:09:07.641      "compare": false,
00:09:07.641      "compare_and_write": false,
00:09:07.641      "abort": false,
00:09:07.641      "seek_hole": false,
00:09:07.641      "seek_data": false,
00:09:07.641      "copy": false,
00:09:07.641      "nvme_iov_md": false
00:09:07.641    },
00:09:07.641    "memory_domains": [
00:09:07.641      {
00:09:07.641        "dma_device_id": "system",
00:09:07.641        "dma_device_type": 1
00:09:07.641      },
00:09:07.641      {
00:09:07.641        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:07.641        "dma_device_type": 2
00:09:07.641      },
00:09:07.641      {
00:09:07.641        "dma_device_id": "system",
00:09:07.641        "dma_device_type": 1
00:09:07.641      },
00:09:07.641      {
00:09:07.641        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:07.641        "dma_device_type": 2
00:09:07.641      }
00:09:07.641    ],
00:09:07.641    "driver_specific": {
00:09:07.641      "raid": {
00:09:07.641        "uuid": "7d7a7e2a-2193-4ec0-b9a8-55b6a9c3b131",
00:09:07.641        "strip_size_kb": 0,
00:09:07.641        "state": "online",
00:09:07.641        "raid_level": "raid1",
00:09:07.641        "superblock": false,
00:09:07.641        "num_base_bdevs": 2,
00:09:07.641        "num_base_bdevs_discovered": 2,
00:09:07.641        "num_base_bdevs_operational": 2,
00:09:07.641        "base_bdevs_list": [
00:09:07.641          {
00:09:07.641            "name": "BaseBdev1",
00:09:07.641            "uuid": "1d0efc9d-ccf6-40cd-8f96-8845f1c69b8e",
00:09:07.641            "is_configured": true,
00:09:07.641            "data_offset": 0,
00:09:07.641            "data_size": 65536
00:09:07.641          },
00:09:07.641          {
00:09:07.641            "name": "BaseBdev2",
00:09:07.641            "uuid": "8d429877-bea6-49aa-9b96-ab0fcdf302c3",
00:09:07.641            "is_configured": true,
00:09:07.641            "data_offset": 0,
00:09:07.641            "data_size": 65536
00:09:07.641          }
00:09:07.641        ]
00:09:07.641      }
00:09:07.641    }
00:09:07.641  }'
00:09:07.641    11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:07.641   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:09:07.641  BaseBdev2'
00:09:07.641    11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:07.641   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:09:07.641   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:07.641    11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:09:07.641    11:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.641    11:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.641    11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:07.641    11:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.641   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:09:07.641   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:09:07.641   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:07.641    11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:09:07.641    11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:07.641    11:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.641    11:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.901    11:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.901   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:09:07.901   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:09:07.901   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:07.901   11:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.901   11:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.901  [2024-12-16 11:30:33.751447] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:07.901   11:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.901   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:09:07.901   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:09:07.901   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:07.901   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0
00:09:07.901   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:09:07.901   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1
00:09:07.901   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:07.901   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:07.901   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:07.901   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:07.901   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:09:07.901   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:07.901   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:07.901   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:07.901   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:07.901    11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:07.901    11:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.901    11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:07.901    11:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.901    11:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.901   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:07.901    "name": "Existed_Raid",
00:09:07.901    "uuid": "7d7a7e2a-2193-4ec0-b9a8-55b6a9c3b131",
00:09:07.901    "strip_size_kb": 0,
00:09:07.901    "state": "online",
00:09:07.901    "raid_level": "raid1",
00:09:07.901    "superblock": false,
00:09:07.901    "num_base_bdevs": 2,
00:09:07.901    "num_base_bdevs_discovered": 1,
00:09:07.901    "num_base_bdevs_operational": 1,
00:09:07.901    "base_bdevs_list": [
00:09:07.901      {
00:09:07.901        "name": null,
00:09:07.901        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:07.901        "is_configured": false,
00:09:07.901        "data_offset": 0,
00:09:07.901        "data_size": 65536
00:09:07.901      },
00:09:07.901      {
00:09:07.901        "name": "BaseBdev2",
00:09:07.901        "uuid": "8d429877-bea6-49aa-9b96-ab0fcdf302c3",
00:09:07.901        "is_configured": true,
00:09:07.901        "data_offset": 0,
00:09:07.901        "data_size": 65536
00:09:07.901      }
00:09:07.901    ]
00:09:07.901  }'
00:09:07.901   11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:07.901   11:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.160   11:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:09:08.160   11:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:08.160    11:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:08.160    11:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:08.160    11:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.160    11:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.419    11:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.419   11:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:08.419   11:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:08.419   11:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:09:08.419   11:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.419   11:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.419  [2024-12-16 11:30:34.270794] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:08.419  [2024-12-16 11:30:34.270959] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:08.419  [2024-12-16 11:30:34.283362] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:08.419  [2024-12-16 11:30:34.283513] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:08.419  [2024-12-16 11:30:34.283594] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline
00:09:08.419   11:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.419   11:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:08.419   11:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:08.419    11:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:08.419    11:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:09:08.419    11:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.419    11:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.419    11:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.419   11:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:09:08.419   11:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:09:08.419   11:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:09:08.419   11:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74244
00:09:08.419   11:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 74244 ']'
00:09:08.419   11:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 74244
00:09:08.419    11:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname
00:09:08.419   11:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:08.419    11:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74244
00:09:08.419   11:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:08.419   11:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:08.419   11:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74244'
00:09:08.419  killing process with pid 74244
00:09:08.419   11:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 74244
00:09:08.419  [2024-12-16 11:30:34.372081] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:08.419   11:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 74244
00:09:08.419  [2024-12-16 11:30:34.373150] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:08.679   11:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:09:08.679  
00:09:08.679  real	0m4.177s
00:09:08.679  user	0m6.610s
00:09:08.679  sys	0m0.840s
00:09:08.679   11:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:08.679   11:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.679  ************************************
00:09:08.679  END TEST raid_state_function_test
00:09:08.679  ************************************
00:09:08.679   11:30:34 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true
00:09:08.679   11:30:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:09:08.679   11:30:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:08.679   11:30:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:08.679  ************************************
00:09:08.679  START TEST raid_state_function_test_sb
00:09:08.679  ************************************
00:09:08.679   11:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true
00:09:08.679   11:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:09:08.679   11:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:09:08.679   11:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:09:08.679   11:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:09:08.679    11:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:09:08.679    11:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:08.679    11:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:09:08.679    11:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:08.679    11:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:08.679    11:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:09:08.679    11:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:08.679    11:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:08.679   11:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:09:08.679   11:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:09:08.679   11:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:09:08.679   11:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:09:08.679   11:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:09:08.679   11:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:09:08.679   11:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:09:08.679   11:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:09:08.679   11:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:09:08.679   11:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:09:08.679   11:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74486
00:09:08.679   11:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:09:08.679  Process raid pid: 74486
00:09:08.679   11:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74486'
00:09:08.679   11:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74486
00:09:08.679   11:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 74486 ']'
00:09:08.679   11:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:08.679   11:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:08.679   11:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:08.679  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:08.679   11:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:08.679   11:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:08.938  [2024-12-16 11:30:34.798204] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:09:08.938  [2024-12-16 11:30:34.798446] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:08.938  [2024-12-16 11:30:34.949692] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:08.939  [2024-12-16 11:30:35.002128] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:09:09.197  [2024-12-16 11:30:35.048455] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:09.197  [2024-12-16 11:30:35.048503] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:09.764   11:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:09.764   11:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0
00:09:09.764   11:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:09:09.764   11:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.764   11:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:09.764  [2024-12-16 11:30:35.720162] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:09.764  [2024-12-16 11:30:35.720303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:09.764  [2024-12-16 11:30:35.720323] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:09.764  [2024-12-16 11:30:35.720335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:09.764   11:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.764   11:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:09:09.764   11:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:09.764   11:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:09.764   11:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:09.764   11:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:09.764   11:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:09.764   11:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:09.764   11:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:09.764   11:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:09.764   11:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:09.764    11:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:09.764    11:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:09.764    11:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.764    11:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:09.764    11:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.764   11:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:09.764    "name": "Existed_Raid",
00:09:09.764    "uuid": "f312b844-7af1-43ac-bd8f-fd605b0319b0",
00:09:09.764    "strip_size_kb": 0,
00:09:09.764    "state": "configuring",
00:09:09.764    "raid_level": "raid1",
00:09:09.764    "superblock": true,
00:09:09.764    "num_base_bdevs": 2,
00:09:09.764    "num_base_bdevs_discovered": 0,
00:09:09.764    "num_base_bdevs_operational": 2,
00:09:09.764    "base_bdevs_list": [
00:09:09.764      {
00:09:09.764        "name": "BaseBdev1",
00:09:09.764        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:09.764        "is_configured": false,
00:09:09.764        "data_offset": 0,
00:09:09.764        "data_size": 0
00:09:09.764      },
00:09:09.764      {
00:09:09.764        "name": "BaseBdev2",
00:09:09.764        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:09.764        "is_configured": false,
00:09:09.764        "data_offset": 0,
00:09:09.764        "data_size": 0
00:09:09.764      }
00:09:09.764    ]
00:09:09.764  }'
00:09:09.765   11:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:09.765   11:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.334  [2024-12-16 11:30:36.219258] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:10.334  [2024-12-16 11:30:36.219404] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.334  [2024-12-16 11:30:36.227303] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:10.334  [2024-12-16 11:30:36.227361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:10.334  [2024-12-16 11:30:36.227377] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:10.334  [2024-12-16 11:30:36.227394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.334  [2024-12-16 11:30:36.247222] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:10.334  BaseBdev1
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.334  [
00:09:10.334  {
00:09:10.334  "name": "BaseBdev1",
00:09:10.334  "aliases": [
00:09:10.334  "732ee520-3f7b-403c-b2a8-4f9265ded3c9"
00:09:10.334  ],
00:09:10.334  "product_name": "Malloc disk",
00:09:10.334  "block_size": 512,
00:09:10.334  "num_blocks": 65536,
00:09:10.334  "uuid": "732ee520-3f7b-403c-b2a8-4f9265ded3c9",
00:09:10.334  "assigned_rate_limits": {
00:09:10.334  "rw_ios_per_sec": 0,
00:09:10.334  "rw_mbytes_per_sec": 0,
00:09:10.334  "r_mbytes_per_sec": 0,
00:09:10.334  "w_mbytes_per_sec": 0
00:09:10.334  },
00:09:10.334  "claimed": true,
00:09:10.334  "claim_type": "exclusive_write",
00:09:10.334  "zoned": false,
00:09:10.334  "supported_io_types": {
00:09:10.334  "read": true,
00:09:10.334  "write": true,
00:09:10.334  "unmap": true,
00:09:10.334  "flush": true,
00:09:10.334  "reset": true,
00:09:10.334  "nvme_admin": false,
00:09:10.334  "nvme_io": false,
00:09:10.334  "nvme_io_md": false,
00:09:10.334  "write_zeroes": true,
00:09:10.334  "zcopy": true,
00:09:10.334  "get_zone_info": false,
00:09:10.334  "zone_management": false,
00:09:10.334  "zone_append": false,
00:09:10.334  "compare": false,
00:09:10.334  "compare_and_write": false,
00:09:10.334  "abort": true,
00:09:10.334  "seek_hole": false,
00:09:10.334  "seek_data": false,
00:09:10.334  "copy": true,
00:09:10.334  "nvme_iov_md": false
00:09:10.334  },
00:09:10.334  "memory_domains": [
00:09:10.334  {
00:09:10.334  "dma_device_id": "system",
00:09:10.334  "dma_device_type": 1
00:09:10.334  },
00:09:10.334  {
00:09:10.334  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:10.334  "dma_device_type": 2
00:09:10.334  }
00:09:10.334  ],
00:09:10.334  "driver_specific": {}
00:09:10.334  }
00:09:10.334  ]
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:10.334    11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:10.334    11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.334    11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.334    11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:10.334    11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.334   11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:10.334    "name": "Existed_Raid",
00:09:10.335    "uuid": "e9497f8a-7714-4491-afaa-9fbee6b197be",
00:09:10.335    "strip_size_kb": 0,
00:09:10.335    "state": "configuring",
00:09:10.335    "raid_level": "raid1",
00:09:10.335    "superblock": true,
00:09:10.335    "num_base_bdevs": 2,
00:09:10.335    "num_base_bdevs_discovered": 1,
00:09:10.335    "num_base_bdevs_operational": 2,
00:09:10.335    "base_bdevs_list": [
00:09:10.335      {
00:09:10.335        "name": "BaseBdev1",
00:09:10.335        "uuid": "732ee520-3f7b-403c-b2a8-4f9265ded3c9",
00:09:10.335        "is_configured": true,
00:09:10.335        "data_offset": 2048,
00:09:10.335        "data_size": 63488
00:09:10.335      },
00:09:10.335      {
00:09:10.335        "name": "BaseBdev2",
00:09:10.335        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:10.335        "is_configured": false,
00:09:10.335        "data_offset": 0,
00:09:10.335        "data_size": 0
00:09:10.335      }
00:09:10.335    ]
00:09:10.335  }'
00:09:10.335   11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:10.335   11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.904   11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:10.904   11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.904   11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.904  [2024-12-16 11:30:36.730478] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:10.904  [2024-12-16 11:30:36.730564] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:09:10.904   11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.904   11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:09:10.904   11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.904   11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.904  [2024-12-16 11:30:36.738511] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:10.904  [2024-12-16 11:30:36.740756] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:10.904  [2024-12-16 11:30:36.740809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:10.904   11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.904   11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:09:10.904   11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:10.904   11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:09:10.904   11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:10.904   11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:10.904   11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:10.904   11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:10.904   11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:10.904   11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:10.904   11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:10.904   11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:10.904   11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:10.904    11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:10.904    11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.904    11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.904    11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:10.904    11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.904   11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:10.904    "name": "Existed_Raid",
00:09:10.904    "uuid": "f9fb433f-e56f-4367-9682-5dbe53f86c48",
00:09:10.904    "strip_size_kb": 0,
00:09:10.904    "state": "configuring",
00:09:10.904    "raid_level": "raid1",
00:09:10.904    "superblock": true,
00:09:10.904    "num_base_bdevs": 2,
00:09:10.904    "num_base_bdevs_discovered": 1,
00:09:10.904    "num_base_bdevs_operational": 2,
00:09:10.904    "base_bdevs_list": [
00:09:10.904      {
00:09:10.904        "name": "BaseBdev1",
00:09:10.904        "uuid": "732ee520-3f7b-403c-b2a8-4f9265ded3c9",
00:09:10.904        "is_configured": true,
00:09:10.904        "data_offset": 2048,
00:09:10.904        "data_size": 63488
00:09:10.904      },
00:09:10.904      {
00:09:10.904        "name": "BaseBdev2",
00:09:10.904        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:10.904        "is_configured": false,
00:09:10.904        "data_offset": 0,
00:09:10.904        "data_size": 0
00:09:10.904      }
00:09:10.904    ]
00:09:10.904  }'
00:09:10.904   11:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:10.904   11:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:11.472   11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:11.472   11:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:11.472   11:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:11.472  [2024-12-16 11:30:37.259304] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:11.472  [2024-12-16 11:30:37.259677] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:09:11.472  [2024-12-16 11:30:37.259745] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:09:11.472  [2024-12-16 11:30:37.260203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:09:11.472  BaseBdev2
00:09:11.472  [2024-12-16 11:30:37.260503] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:09:11.472  [2024-12-16 11:30:37.260627] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:09:11.472  [2024-12-16 11:30:37.260861] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:11.472   11:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:11.472   11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:09:11.472   11:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:09:11.472   11:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:11.472   11:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:09:11.472   11:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:11.472   11:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:11.472   11:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:11.472   11:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:11.472   11:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:11.472   11:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:11.472   11:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:11.472   11:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:11.472   11:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:11.472  [
00:09:11.472  {
00:09:11.472  "name": "BaseBdev2",
00:09:11.472  "aliases": [
00:09:11.472  "18da59a6-3ca5-4e19-a084-4d947dc80299"
00:09:11.472  ],
00:09:11.472  "product_name": "Malloc disk",
00:09:11.472  "block_size": 512,
00:09:11.472  "num_blocks": 65536,
00:09:11.472  "uuid": "18da59a6-3ca5-4e19-a084-4d947dc80299",
00:09:11.472  "assigned_rate_limits": {
00:09:11.472  "rw_ios_per_sec": 0,
00:09:11.472  "rw_mbytes_per_sec": 0,
00:09:11.472  "r_mbytes_per_sec": 0,
00:09:11.472  "w_mbytes_per_sec": 0
00:09:11.472  },
00:09:11.472  "claimed": true,
00:09:11.472  "claim_type": "exclusive_write",
00:09:11.472  "zoned": false,
00:09:11.472  "supported_io_types": {
00:09:11.472  "read": true,
00:09:11.472  "write": true,
00:09:11.472  "unmap": true,
00:09:11.472  "flush": true,
00:09:11.472  "reset": true,
00:09:11.472  "nvme_admin": false,
00:09:11.472  "nvme_io": false,
00:09:11.472  "nvme_io_md": false,
00:09:11.472  "write_zeroes": true,
00:09:11.472  "zcopy": true,
00:09:11.472  "get_zone_info": false,
00:09:11.472  "zone_management": false,
00:09:11.472  "zone_append": false,
00:09:11.472  "compare": false,
00:09:11.472  "compare_and_write": false,
00:09:11.472  "abort": true,
00:09:11.472  "seek_hole": false,
00:09:11.472  "seek_data": false,
00:09:11.472  "copy": true,
00:09:11.473  "nvme_iov_md": false
00:09:11.473  },
00:09:11.473  "memory_domains": [
00:09:11.473  {
00:09:11.473  "dma_device_id": "system",
00:09:11.473  "dma_device_type": 1
00:09:11.473  },
00:09:11.473  {
00:09:11.473  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:11.473  "dma_device_type": 2
00:09:11.473  }
00:09:11.473  ],
00:09:11.473  "driver_specific": {}
00:09:11.473  }
00:09:11.473  ]
00:09:11.473   11:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:11.473   11:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:09:11.473   11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:11.473   11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:11.473   11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:09:11.473   11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:11.473   11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:11.473   11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:11.473   11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:11.473   11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:11.473   11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:11.473   11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:11.473   11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:11.473   11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:11.473    11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:11.473    11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:11.473    11:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:11.473    11:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:11.473    11:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:11.473   11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:11.473    "name": "Existed_Raid",
00:09:11.473    "uuid": "f9fb433f-e56f-4367-9682-5dbe53f86c48",
00:09:11.473    "strip_size_kb": 0,
00:09:11.473    "state": "online",
00:09:11.473    "raid_level": "raid1",
00:09:11.473    "superblock": true,
00:09:11.473    "num_base_bdevs": 2,
00:09:11.473    "num_base_bdevs_discovered": 2,
00:09:11.473    "num_base_bdevs_operational": 2,
00:09:11.473    "base_bdevs_list": [
00:09:11.473      {
00:09:11.473        "name": "BaseBdev1",
00:09:11.473        "uuid": "732ee520-3f7b-403c-b2a8-4f9265ded3c9",
00:09:11.473        "is_configured": true,
00:09:11.473        "data_offset": 2048,
00:09:11.473        "data_size": 63488
00:09:11.473      },
00:09:11.473      {
00:09:11.473        "name": "BaseBdev2",
00:09:11.473        "uuid": "18da59a6-3ca5-4e19-a084-4d947dc80299",
00:09:11.473        "is_configured": true,
00:09:11.473        "data_offset": 2048,
00:09:11.473        "data_size": 63488
00:09:11.473      }
00:09:11.473    ]
00:09:11.473  }'
00:09:11.473   11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:11.473   11:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
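Note: the verify_raid_bdev_state call above reduces to a handful of jq lookups over the bdev_raid_get_bdevs output. A minimal sketch of the same checks, assuming a running SPDK target and the autotest rpc_cmd helper (the variable names below are illustrative, not the helper's own):

    # Re-derive the fields the state check asserts on for Existed_Raid.
    tmp=$(rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    state=$(jq -r '.state' <<< "$tmp")                                 # expected "online"
    raid_level=$(jq -r '.raid_level' <<< "$tmp")                       # expected "raid1"
    num_operational=$(jq -r '.num_base_bdevs_operational' <<< "$tmp")  # expected 2
    [[ $state == online && $raid_level == raid1 && $num_operational -eq 2 ]] || exit 1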
00:09:11.733   11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:09:11.733   11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:09:11.733   11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:11.733   11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:11.733   11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:09:11.733   11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:11.733    11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:09:11.733    11:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:11.733    11:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:11.733    11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:11.733  [2024-12-16 11:30:37.778871] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:11.733    11:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:11.993   11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:11.993    "name": "Existed_Raid",
00:09:11.993    "aliases": [
00:09:11.993      "f9fb433f-e56f-4367-9682-5dbe53f86c48"
00:09:11.993    ],
00:09:11.993    "product_name": "Raid Volume",
00:09:11.993    "block_size": 512,
00:09:11.993    "num_blocks": 63488,
00:09:11.993    "uuid": "f9fb433f-e56f-4367-9682-5dbe53f86c48",
00:09:11.993    "assigned_rate_limits": {
00:09:11.993      "rw_ios_per_sec": 0,
00:09:11.993      "rw_mbytes_per_sec": 0,
00:09:11.993      "r_mbytes_per_sec": 0,
00:09:11.993      "w_mbytes_per_sec": 0
00:09:11.993    },
00:09:11.993    "claimed": false,
00:09:11.993    "zoned": false,
00:09:11.993    "supported_io_types": {
00:09:11.993      "read": true,
00:09:11.993      "write": true,
00:09:11.993      "unmap": false,
00:09:11.993      "flush": false,
00:09:11.993      "reset": true,
00:09:11.993      "nvme_admin": false,
00:09:11.993      "nvme_io": false,
00:09:11.993      "nvme_io_md": false,
00:09:11.993      "write_zeroes": true,
00:09:11.993      "zcopy": false,
00:09:11.993      "get_zone_info": false,
00:09:11.993      "zone_management": false,
00:09:11.993      "zone_append": false,
00:09:11.993      "compare": false,
00:09:11.993      "compare_and_write": false,
00:09:11.993      "abort": false,
00:09:11.993      "seek_hole": false,
00:09:11.993      "seek_data": false,
00:09:11.993      "copy": false,
00:09:11.993      "nvme_iov_md": false
00:09:11.993    },
00:09:11.993    "memory_domains": [
00:09:11.993      {
00:09:11.993        "dma_device_id": "system",
00:09:11.993        "dma_device_type": 1
00:09:11.993      },
00:09:11.993      {
00:09:11.993        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:11.993        "dma_device_type": 2
00:09:11.993      },
00:09:11.993      {
00:09:11.993        "dma_device_id": "system",
00:09:11.993        "dma_device_type": 1
00:09:11.993      },
00:09:11.993      {
00:09:11.993        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:11.993        "dma_device_type": 2
00:09:11.993      }
00:09:11.993    ],
00:09:11.993    "driver_specific": {
00:09:11.993      "raid": {
00:09:11.993        "uuid": "f9fb433f-e56f-4367-9682-5dbe53f86c48",
00:09:11.993        "strip_size_kb": 0,
00:09:11.993        "state": "online",
00:09:11.993        "raid_level": "raid1",
00:09:11.993        "superblock": true,
00:09:11.993        "num_base_bdevs": 2,
00:09:11.993        "num_base_bdevs_discovered": 2,
00:09:11.993        "num_base_bdevs_operational": 2,
00:09:11.993        "base_bdevs_list": [
00:09:11.993          {
00:09:11.993            "name": "BaseBdev1",
00:09:11.993            "uuid": "732ee520-3f7b-403c-b2a8-4f9265ded3c9",
00:09:11.993            "is_configured": true,
00:09:11.993            "data_offset": 2048,
00:09:11.993            "data_size": 63488
00:09:11.993          },
00:09:11.993          {
00:09:11.993            "name": "BaseBdev2",
00:09:11.993            "uuid": "18da59a6-3ca5-4e19-a084-4d947dc80299",
00:09:11.993            "is_configured": true,
00:09:11.993            "data_offset": 2048,
00:09:11.993            "data_size": 63488
00:09:11.993          }
00:09:11.993        ]
00:09:11.993      }
00:09:11.993    }
00:09:11.993  }'
00:09:11.993    11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:11.993   11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:09:11.993  BaseBdev2'
00:09:11.993    11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:11.993   11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:09:11.993   11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:11.993    11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:09:11.993    11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:11.993    11:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:11.993    11:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:11.993    11:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:11.993   11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:09:11.993   11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:09:11.993   11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:11.993    11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:09:11.993    11:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:11.993    11:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:11.993    11:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:11.993    11:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:11.993   11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:09:11.993   11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
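Note: the '512   ' comparisons above are the property check in verify_raid_bdev_properties: the raid bdev and every configured base bdev must report the same block_size, md_size, md_interleave and dif_type (the md fields are empty for these malloc bdevs, hence the trailing spaces in the joined string). A sketch of that comparison using the same RPCs and jq filters, not the helper's verbatim body:

    cmp_raid_bdev=$(rpc_cmd bdev_get_bdevs -b Existed_Raid \
      | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
    for name in BaseBdev1 BaseBdev2; do
      cmp_base_bdev=$(rpc_cmd bdev_get_bdevs -b "$name" \
        | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
      [[ $cmp_base_bdev == "$cmp_raid_bdev" ]] || exit 1   # both are "512   " here
    done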
00:09:11.993   11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:11.993   11:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:11.993   11:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:11.993  [2024-12-16 11:30:38.030171] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:11.993   11:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:11.993   11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:09:11.993   11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:09:11.993   11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:11.993   11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0
00:09:11.993   11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:09:11.993   11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1
00:09:11.993   11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:11.993   11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:11.993   11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:11.993   11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:11.993   11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:09:11.993   11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:11.993   11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:11.993   11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:11.993   11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:11.993    11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:11.993    11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:11.993    11:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:11.993    11:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:12.252    11:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:12.252   11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:12.252    "name": "Existed_Raid",
00:09:12.252    "uuid": "f9fb433f-e56f-4367-9682-5dbe53f86c48",
00:09:12.252    "strip_size_kb": 0,
00:09:12.252    "state": "online",
00:09:12.252    "raid_level": "raid1",
00:09:12.252    "superblock": true,
00:09:12.252    "num_base_bdevs": 2,
00:09:12.252    "num_base_bdevs_discovered": 1,
00:09:12.252    "num_base_bdevs_operational": 1,
00:09:12.252    "base_bdevs_list": [
00:09:12.252      {
00:09:12.252        "name": null,
00:09:12.252        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:12.252        "is_configured": false,
00:09:12.252        "data_offset": 0,
00:09:12.252        "data_size": 63488
00:09:12.252      },
00:09:12.252      {
00:09:12.252        "name": "BaseBdev2",
00:09:12.252        "uuid": "18da59a6-3ca5-4e19-a084-4d947dc80299",
00:09:12.252        "is_configured": true,
00:09:12.252        "data_offset": 2048,
00:09:12.252        "data_size": 63488
00:09:12.252      }
00:09:12.252    ]
00:09:12.252  }'
00:09:12.252   11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:12.252   11:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:12.511   11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:09:12.511   11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:12.511    11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:12.511    11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:12.511    11:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:12.511    11:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:12.511    11:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:12.511   11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:12.511   11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:12.511   11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:09:12.771   11:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:12.771   11:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:12.771  [2024-12-16 11:30:38.581341] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:12.771  [2024-12-16 11:30:38.581456] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:12.771  [2024-12-16 11:30:38.593599] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:12.771  [2024-12-16 11:30:38.593655] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:12.771  [2024-12-16 11:30:38.593669] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline
00:09:12.771   11:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:12.771   11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:12.771   11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:12.771    11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:09:12.771    11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:12.771    11:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:12.771    11:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:12.771    11:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:12.771   11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:09:12.771   11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:09:12.771   11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:09:12.771   11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74486
00:09:12.771   11:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 74486 ']'
00:09:12.771   11:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 74486
00:09:12.771    11:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname
00:09:12.771   11:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:12.771    11:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74486
00:09:12.771   11:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:12.771   11:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:12.771   11:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74486'
00:09:12.771  killing process with pid 74486
00:09:12.771   11:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 74486
00:09:12.771  [2024-12-16 11:30:38.697574] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:12.771   11:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 74486
00:09:12.771  [2024-12-16 11:30:38.698668] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:13.030   11:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:09:13.030  
00:09:13.030  real	0m4.263s
00:09:13.030  user	0m6.773s
00:09:13.030  sys	0m0.811s
00:09:13.030   11:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:13.030   11:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:13.030  ************************************
00:09:13.030  END TEST raid_state_function_test_sb
00:09:13.030  ************************************
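Note: the tail of raid_state_function_test_sb above exercises degradation and teardown for raid1 with a superblock. Condensed to the RPCs already shown in the log (a sketch, not additional test code):

    rpc_cmd bdev_malloc_delete BaseBdev1   # raid1 has redundancy: Existed_Raid stays online,
                                           # num_base_bdevs_operational drops from 2 to 1
    rpc_cmd bdev_malloc_delete BaseBdev2   # last member removed: state goes online -> offline
                                           # and the raid bdev is cleaned up
    rpc_cmd bdev_raid_get_bdevs all        # Existed_Raid is no longer reported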
00:09:13.030   11:30:39 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2
00:09:13.030   11:30:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:09:13.030   11:30:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:13.030   11:30:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:13.030  ************************************
00:09:13.030  START TEST raid_superblock_test
00:09:13.030  ************************************
00:09:13.030   11:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2
00:09:13.030   11:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:09:13.030   11:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:09:13.030   11:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:09:13.030   11:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:09:13.030   11:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:09:13.030   11:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:09:13.030   11:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:09:13.031   11:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:09:13.031   11:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:09:13.031   11:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:09:13.031   11:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:09:13.031   11:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:09:13.031   11:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:09:13.031   11:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:09:13.031   11:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:09:13.031   11:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74727
00:09:13.031   11:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:09:13.031   11:30:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74727
00:09:13.031   11:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 74727 ']'
00:09:13.031   11:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:13.031  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:13.031   11:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:13.031   11:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:13.031   11:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:13.031   11:30:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:13.289  [2024-12-16 11:30:39.110016] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:09:13.289  [2024-12-16 11:30:39.110257] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74727 ]
00:09:13.289  [2024-12-16 11:30:39.275644] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:13.289  [2024-12-16 11:30:39.336692] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:09:13.549  [2024-12-16 11:30:39.382718] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:13.549  [2024-12-16 11:30:39.382764] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:14.120  malloc1
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:14.120  [2024-12-16 11:30:40.051136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:09:14.120  [2024-12-16 11:30:40.051284] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:14.120  [2024-12-16 11:30:40.051349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:09:14.120  [2024-12-16 11:30:40.051409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:14.120  [2024-12-16 11:30:40.054002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:14.120  [2024-12-16 11:30:40.054092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:09:14.120  pt1
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:14.120  malloc2
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:14.120  [2024-12-16 11:30:40.094602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:14.120  [2024-12-16 11:30:40.094748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:14.120  [2024-12-16 11:30:40.094809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:09:14.120  [2024-12-16 11:30:40.094895] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:14.120  [2024-12-16 11:30:40.097784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:14.120  [2024-12-16 11:30:40.097872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:14.120  pt2
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:14.120  [2024-12-16 11:30:40.106675] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:09:14.120  [2024-12-16 11:30:40.108931] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:14.120  [2024-12-16 11:30:40.109152] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:09:14.120  [2024-12-16 11:30:40.109209] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:09:14.120  [2024-12-16 11:30:40.109545] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:09:14.120  [2024-12-16 11:30:40.109786] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:09:14.120  [2024-12-16 11:30:40.109839] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:09:14.120  [2024-12-16 11:30:40.110000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
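Note: the construction sequence for raid_superblock_test above is two 32 MiB malloc bdevs, each wrapped in a passthru bdev with a fixed UUID, then combined into a raid1 volume with an on-disk superblock. The same steps as standalone RPC calls (assuming the autotest rpc_cmd helper; scripts/rpc.py takes the same arguments):

    rpc_cmd bdev_malloc_create 32 512 -b malloc1   # 32 MiB, 512-byte blocks -> 65536 blocks
    rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    rpc_cmd bdev_malloc_create 32 512 -b malloc2
    rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    rpc_cmd bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s   # -s writes the superblock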
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:14.120    11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:14.120    11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:14.120    11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:14.120    11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:14.120    11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:14.120    "name": "raid_bdev1",
00:09:14.120    "uuid": "cda2356e-e26a-47fe-aa83-f6250485f08e",
00:09:14.120    "strip_size_kb": 0,
00:09:14.120    "state": "online",
00:09:14.120    "raid_level": "raid1",
00:09:14.120    "superblock": true,
00:09:14.120    "num_base_bdevs": 2,
00:09:14.120    "num_base_bdevs_discovered": 2,
00:09:14.120    "num_base_bdevs_operational": 2,
00:09:14.120    "base_bdevs_list": [
00:09:14.120      {
00:09:14.120        "name": "pt1",
00:09:14.120        "uuid": "00000000-0000-0000-0000-000000000001",
00:09:14.120        "is_configured": true,
00:09:14.120        "data_offset": 2048,
00:09:14.120        "data_size": 63488
00:09:14.120      },
00:09:14.120      {
00:09:14.120        "name": "pt2",
00:09:14.120        "uuid": "00000000-0000-0000-0000-000000000002",
00:09:14.120        "is_configured": true,
00:09:14.120        "data_offset": 2048,
00:09:14.120        "data_size": 63488
00:09:14.120      }
00:09:14.120    ]
00:09:14.120  }'
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:14.120   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:14.688   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:09:14.688   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:09:14.688   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:14.688   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:14.688   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:14.688   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:14.688    11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:14.688    11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:14.688    11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:14.688    11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:14.688  [2024-12-16 11:30:40.538311] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:14.688    11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:14.688   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:14.688    "name": "raid_bdev1",
00:09:14.688    "aliases": [
00:09:14.688      "cda2356e-e26a-47fe-aa83-f6250485f08e"
00:09:14.688    ],
00:09:14.688    "product_name": "Raid Volume",
00:09:14.688    "block_size": 512,
00:09:14.688    "num_blocks": 63488,
00:09:14.688    "uuid": "cda2356e-e26a-47fe-aa83-f6250485f08e",
00:09:14.688    "assigned_rate_limits": {
00:09:14.688      "rw_ios_per_sec": 0,
00:09:14.688      "rw_mbytes_per_sec": 0,
00:09:14.688      "r_mbytes_per_sec": 0,
00:09:14.688      "w_mbytes_per_sec": 0
00:09:14.688    },
00:09:14.688    "claimed": false,
00:09:14.688    "zoned": false,
00:09:14.688    "supported_io_types": {
00:09:14.688      "read": true,
00:09:14.688      "write": true,
00:09:14.688      "unmap": false,
00:09:14.688      "flush": false,
00:09:14.688      "reset": true,
00:09:14.688      "nvme_admin": false,
00:09:14.688      "nvme_io": false,
00:09:14.688      "nvme_io_md": false,
00:09:14.688      "write_zeroes": true,
00:09:14.688      "zcopy": false,
00:09:14.688      "get_zone_info": false,
00:09:14.688      "zone_management": false,
00:09:14.688      "zone_append": false,
00:09:14.688      "compare": false,
00:09:14.688      "compare_and_write": false,
00:09:14.689      "abort": false,
00:09:14.689      "seek_hole": false,
00:09:14.689      "seek_data": false,
00:09:14.689      "copy": false,
00:09:14.689      "nvme_iov_md": false
00:09:14.689    },
00:09:14.689    "memory_domains": [
00:09:14.689      {
00:09:14.689        "dma_device_id": "system",
00:09:14.689        "dma_device_type": 1
00:09:14.689      },
00:09:14.689      {
00:09:14.689        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:14.689        "dma_device_type": 2
00:09:14.689      },
00:09:14.689      {
00:09:14.689        "dma_device_id": "system",
00:09:14.689        "dma_device_type": 1
00:09:14.689      },
00:09:14.689      {
00:09:14.689        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:14.689        "dma_device_type": 2
00:09:14.689      }
00:09:14.689    ],
00:09:14.689    "driver_specific": {
00:09:14.689      "raid": {
00:09:14.689        "uuid": "cda2356e-e26a-47fe-aa83-f6250485f08e",
00:09:14.689        "strip_size_kb": 0,
00:09:14.689        "state": "online",
00:09:14.689        "raid_level": "raid1",
00:09:14.689        "superblock": true,
00:09:14.689        "num_base_bdevs": 2,
00:09:14.689        "num_base_bdevs_discovered": 2,
00:09:14.689        "num_base_bdevs_operational": 2,
00:09:14.689        "base_bdevs_list": [
00:09:14.689          {
00:09:14.689            "name": "pt1",
00:09:14.689            "uuid": "00000000-0000-0000-0000-000000000001",
00:09:14.689            "is_configured": true,
00:09:14.689            "data_offset": 2048,
00:09:14.689            "data_size": 63488
00:09:14.689          },
00:09:14.689          {
00:09:14.689            "name": "pt2",
00:09:14.689            "uuid": "00000000-0000-0000-0000-000000000002",
00:09:14.689            "is_configured": true,
00:09:14.689            "data_offset": 2048,
00:09:14.689            "data_size": 63488
00:09:14.689          }
00:09:14.689        ]
00:09:14.689      }
00:09:14.689    }
00:09:14.689  }'
00:09:14.689    11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:14.689   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:09:14.689  pt2'
00:09:14.689    11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:14.689   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:09:14.689   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:14.689    11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:09:14.689    11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:14.689    11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:14.689    11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:14.689    11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:14.689   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:09:14.689   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:09:14.689   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:14.689    11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:14.689    11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:09:14.689    11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:14.689    11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:14.689    11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:14.689   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:09:14.689   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:09:14.689    11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:14.689    11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:09:14.689    11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:14.689    11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:14.689  [2024-12-16 11:30:40.749947] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:14.948    11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:14.948   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=cda2356e-e26a-47fe-aa83-f6250485f08e
00:09:14.948   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z cda2356e-e26a-47fe-aa83-f6250485f08e ']'
00:09:14.948   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:14.948   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:14.948   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:14.948  [2024-12-16 11:30:40.797560] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:14.948  [2024-12-16 11:30:40.797601] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:14.948  [2024-12-16 11:30:40.797692] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:14.948  [2024-12-16 11:30:40.797775] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:14.948  [2024-12-16 11:30:40.797787] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:09:14.948   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:14.948    11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:14.948    11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:09:14.948    11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:14.948    11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:14.948    11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:14.948   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:09:14.948   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:09:14.948   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:14.948   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:09:14.948   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:14.948   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:14.948   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:14.948   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:14.948   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:09:14.948   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:14.948   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:14.948   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:14.948    11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:09:14.948    11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:09:14.948    11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:14.948    11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:14.948    11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:14.948   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:09:14.948   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:09:14.948   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:09:14.948   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:09:14.948   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:09:14.948   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:14.948    11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:09:14.948   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:14.948   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:09:14.948   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:14.948   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:14.948  [2024-12-16 11:30:40.933348] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:09:14.948  [2024-12-16 11:30:40.935606] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:09:14.948  [2024-12-16 11:30:40.935746] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:09:14.948  [2024-12-16 11:30:40.935867] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:09:14.948  [2024-12-16 11:30:40.935928] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:14.948  [2024-12-16 11:30:40.935975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring
00:09:14.948  request:
00:09:14.948  {
00:09:14.948  "name": "raid_bdev1",
00:09:14.948  "raid_level": "raid1",
00:09:14.948  "base_bdevs": [
00:09:14.948  "malloc1",
00:09:14.948  "malloc2"
00:09:14.948  ],
00:09:14.948  "superblock": false,
00:09:14.948  "method": "bdev_raid_create",
00:09:14.948  "req_id": 1
00:09:14.948  }
00:09:14.948  Got JSON-RPC error response
00:09:14.948  response:
00:09:14.948  {
00:09:14.948  "code": -17,
00:09:14.948  "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:09:14.948  }
00:09:14.948   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:09:14.948   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:09:14.948   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:09:14.949   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:09:14.949   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
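Note: the -17 (File exists) response above is the expected negative case: raid_bdev1 was deleted, but malloc1 and malloc2 still hold the superblock written for it, so creating a new raid directly on them is rejected. The failing sequence, reduced to the RPCs from the log:

    rpc_cmd bdev_raid_delete raid_bdev1
    rpc_cmd bdev_passthru_delete pt1
    rpc_cmd bdev_passthru_delete pt2
    rpc_cmd bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1   # fails: File exists (-17)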
00:09:14.949    11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:09:14.949    11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:14.949    11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:14.949    11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:14.949    11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:14.949   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:09:14.949   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:09:14.949   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:09:14.949   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:14.949   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:14.949  [2024-12-16 11:30:40.985212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:09:14.949  [2024-12-16 11:30:40.985328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:14.949  [2024-12-16 11:30:40.985380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:09:14.949  [2024-12-16 11:30:40.985415] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:14.949  [2024-12-16 11:30:40.987943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:14.949  [2024-12-16 11:30:40.988029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:09:14.949  [2024-12-16 11:30:40.988155] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:09:14.949  [2024-12-16 11:30:40.988246] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:09:14.949  pt1
00:09:14.949   11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:14.949   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:09:14.949   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:14.949   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:14.949   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:14.949   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:14.949   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:14.949   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:14.949   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:14.949   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:14.949   11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:14.949    11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:14.949    11:30:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:14.949    11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:14.949    11:30:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:15.208    11:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:15.208   11:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:15.208    "name": "raid_bdev1",
00:09:15.208    "uuid": "cda2356e-e26a-47fe-aa83-f6250485f08e",
00:09:15.208    "strip_size_kb": 0,
00:09:15.208    "state": "configuring",
00:09:15.208    "raid_level": "raid1",
00:09:15.208    "superblock": true,
00:09:15.208    "num_base_bdevs": 2,
00:09:15.208    "num_base_bdevs_discovered": 1,
00:09:15.208    "num_base_bdevs_operational": 2,
00:09:15.208    "base_bdevs_list": [
00:09:15.208      {
00:09:15.208        "name": "pt1",
00:09:15.208        "uuid": "00000000-0000-0000-0000-000000000001",
00:09:15.208        "is_configured": true,
00:09:15.208        "data_offset": 2048,
00:09:15.208        "data_size": 63488
00:09:15.208      },
00:09:15.208      {
00:09:15.208        "name": null,
00:09:15.208        "uuid": "00000000-0000-0000-0000-000000000002",
00:09:15.208        "is_configured": false,
00:09:15.208        "data_offset": 2048,
00:09:15.208        "data_size": 63488
00:09:15.208      }
00:09:15.208    ]
00:09:15.208  }'
00:09:15.208   11:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:15.208   11:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:15.466   11:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:09:15.466   11:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:09:15.466   11:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:09:15.466   11:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:15.466   11:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:15.466   11:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:15.466  [2024-12-16 11:30:41.476526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:15.466  [2024-12-16 11:30:41.476689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:15.466  [2024-12-16 11:30:41.476748] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:09:15.466  [2024-12-16 11:30:41.476785] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:15.466  [2024-12-16 11:30:41.477298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:15.466  [2024-12-16 11:30:41.477363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:15.466  [2024-12-16 11:30:41.477483] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:09:15.466  [2024-12-16 11:30:41.477560] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:15.466  [2024-12-16 11:30:41.477702] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:09:15.466  [2024-12-16 11:30:41.477745] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:09:15.466  [2024-12-16 11:30:41.478053] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:09:15.466  [2024-12-16 11:30:41.478237] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:09:15.466  [2024-12-16 11:30:41.478292] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:09:15.466  [2024-12-16 11:30:41.478469] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:15.466  pt2
00:09:15.466   11:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:15.466   11:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:09:15.466   11:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:09:15.466   11:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:09:15.466   11:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:15.466   11:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:15.466   11:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:15.466   11:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:15.466   11:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:15.466   11:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:15.467   11:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:15.467   11:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:15.467   11:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:15.467    11:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:15.467    11:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:15.467    11:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:15.467    11:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:15.467    11:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:15.726   11:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:15.726    "name": "raid_bdev1",
00:09:15.726    "uuid": "cda2356e-e26a-47fe-aa83-f6250485f08e",
00:09:15.726    "strip_size_kb": 0,
00:09:15.726    "state": "online",
00:09:15.726    "raid_level": "raid1",
00:09:15.726    "superblock": true,
00:09:15.726    "num_base_bdevs": 2,
00:09:15.726    "num_base_bdevs_discovered": 2,
00:09:15.726    "num_base_bdevs_operational": 2,
00:09:15.726    "base_bdevs_list": [
00:09:15.726      {
00:09:15.726        "name": "pt1",
00:09:15.726        "uuid": "00000000-0000-0000-0000-000000000001",
00:09:15.726        "is_configured": true,
00:09:15.726        "data_offset": 2048,
00:09:15.726        "data_size": 63488
00:09:15.726      },
00:09:15.726      {
00:09:15.726        "name": "pt2",
00:09:15.726        "uuid": "00000000-0000-0000-0000-000000000002",
00:09:15.726        "is_configured": true,
00:09:15.726        "data_offset": 2048,
00:09:15.726        "data_size": 63488
00:09:15.726      }
00:09:15.726    ]
00:09:15.726  }'
00:09:15.726   11:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:15.726   11:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
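A minimal sketch of the verification pattern repeated throughout this transcript (this is a simplified outline, not the actual verify_raid_bdev_state code in bdev_raid.sh; rpc_cmd is the test framework's RPC helper seen above, and the field names are taken from the JSON dumps in this log):

verify_state_sketch() {
    # fetch the raid bdev's JSON and compare a few fields against expectations
    local name=$1 expected_state=$2 expected_level=$3 expected_operational=$4
    local info
    info=$(rpc_cmd bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
    [[ $(jq -r '.state'      <<< "$info") == "$expected_state" ]] || return 1
    [[ $(jq -r '.raid_level' <<< "$info") == "$expected_level" ]] || return 1
    [[ $(jq -r '.num_base_bdevs_operational' <<< "$info") -eq "$expected_operational" ]] || return 1
}

# corresponds to the check at bdev_raid.sh@483 above:
# verify_state_sketch raid_bdev1 online raid1 2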
00:09:15.984   11:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:09:15.984   11:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:09:15.984   11:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:15.984   11:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:15.984   11:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:15.984   11:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:15.984    11:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:15.984    11:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:15.984    11:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:15.984    11:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:15.984  [2024-12-16 11:30:41.928067] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:15.984    11:30:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:15.984   11:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:15.984    "name": "raid_bdev1",
00:09:15.984    "aliases": [
00:09:15.984      "cda2356e-e26a-47fe-aa83-f6250485f08e"
00:09:15.984    ],
00:09:15.984    "product_name": "Raid Volume",
00:09:15.984    "block_size": 512,
00:09:15.984    "num_blocks": 63488,
00:09:15.984    "uuid": "cda2356e-e26a-47fe-aa83-f6250485f08e",
00:09:15.984    "assigned_rate_limits": {
00:09:15.984      "rw_ios_per_sec": 0,
00:09:15.984      "rw_mbytes_per_sec": 0,
00:09:15.984      "r_mbytes_per_sec": 0,
00:09:15.984      "w_mbytes_per_sec": 0
00:09:15.984    },
00:09:15.984    "claimed": false,
00:09:15.984    "zoned": false,
00:09:15.984    "supported_io_types": {
00:09:15.984      "read": true,
00:09:15.984      "write": true,
00:09:15.984      "unmap": false,
00:09:15.984      "flush": false,
00:09:15.984      "reset": true,
00:09:15.984      "nvme_admin": false,
00:09:15.984      "nvme_io": false,
00:09:15.984      "nvme_io_md": false,
00:09:15.984      "write_zeroes": true,
00:09:15.984      "zcopy": false,
00:09:15.984      "get_zone_info": false,
00:09:15.984      "zone_management": false,
00:09:15.985      "zone_append": false,
00:09:15.985      "compare": false,
00:09:15.985      "compare_and_write": false,
00:09:15.985      "abort": false,
00:09:15.985      "seek_hole": false,
00:09:15.985      "seek_data": false,
00:09:15.985      "copy": false,
00:09:15.985      "nvme_iov_md": false
00:09:15.985    },
00:09:15.985    "memory_domains": [
00:09:15.985      {
00:09:15.985        "dma_device_id": "system",
00:09:15.985        "dma_device_type": 1
00:09:15.985      },
00:09:15.985      {
00:09:15.985        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:15.985        "dma_device_type": 2
00:09:15.985      },
00:09:15.985      {
00:09:15.985        "dma_device_id": "system",
00:09:15.985        "dma_device_type": 1
00:09:15.985      },
00:09:15.985      {
00:09:15.985        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:15.985        "dma_device_type": 2
00:09:15.985      }
00:09:15.985    ],
00:09:15.985    "driver_specific": {
00:09:15.985      "raid": {
00:09:15.985        "uuid": "cda2356e-e26a-47fe-aa83-f6250485f08e",
00:09:15.985        "strip_size_kb": 0,
00:09:15.985        "state": "online",
00:09:15.985        "raid_level": "raid1",
00:09:15.985        "superblock": true,
00:09:15.985        "num_base_bdevs": 2,
00:09:15.985        "num_base_bdevs_discovered": 2,
00:09:15.985        "num_base_bdevs_operational": 2,
00:09:15.985        "base_bdevs_list": [
00:09:15.985          {
00:09:15.985            "name": "pt1",
00:09:15.985            "uuid": "00000000-0000-0000-0000-000000000001",
00:09:15.985            "is_configured": true,
00:09:15.985            "data_offset": 2048,
00:09:15.985            "data_size": 63488
00:09:15.985          },
00:09:15.985          {
00:09:15.985            "name": "pt2",
00:09:15.985            "uuid": "00000000-0000-0000-0000-000000000002",
00:09:15.985            "is_configured": true,
00:09:15.985            "data_offset": 2048,
00:09:15.985            "data_size": 63488
00:09:15.985          }
00:09:15.985        ]
00:09:15.985      }
00:09:15.985    }
00:09:15.985  }'
00:09:15.985    11:30:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:15.985   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:09:15.985  pt2'
00:09:15.985    11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:16.244   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:09:16.244   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:16.244    11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:09:16.244    11:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:16.244    11:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.244    11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:16.244    11:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:16.244   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:09:16.244   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:09:16.244   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:16.244    11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:09:16.244    11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:16.244    11:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:16.244    11:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.244    11:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:16.244   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:09:16.244   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:09:16.244    11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:09:16.244    11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:16.244    11:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:16.244    11:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.244  [2024-12-16 11:30:42.163613] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:16.244    11:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:16.244   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' cda2356e-e26a-47fe-aa83-f6250485f08e '!=' cda2356e-e26a-47fe-aa83-f6250485f08e ']'
00:09:16.244   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:09:16.244   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:16.244   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:09:16.244   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:09:16.244   11:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:16.244   11:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.244  [2024-12-16 11:30:42.215286] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:09:16.244   11:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:16.244   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:09:16.244   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:16.244   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:16.244   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:16.244   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:16.244   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:09:16.244   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:16.244   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:16.244   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:16.244   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:16.244    11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:16.244    11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:16.244    11:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:16.244    11:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.244    11:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:16.244   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:16.244    "name": "raid_bdev1",
00:09:16.245    "uuid": "cda2356e-e26a-47fe-aa83-f6250485f08e",
00:09:16.245    "strip_size_kb": 0,
00:09:16.245    "state": "online",
00:09:16.245    "raid_level": "raid1",
00:09:16.245    "superblock": true,
00:09:16.245    "num_base_bdevs": 2,
00:09:16.245    "num_base_bdevs_discovered": 1,
00:09:16.245    "num_base_bdevs_operational": 1,
00:09:16.245    "base_bdevs_list": [
00:09:16.245      {
00:09:16.245        "name": null,
00:09:16.245        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:16.245        "is_configured": false,
00:09:16.245        "data_offset": 0,
00:09:16.245        "data_size": 63488
00:09:16.245      },
00:09:16.245      {
00:09:16.245        "name": "pt2",
00:09:16.245        "uuid": "00000000-0000-0000-0000-000000000002",
00:09:16.245        "is_configured": true,
00:09:16.245        "data_offset": 2048,
00:09:16.245        "data_size": 63488
00:09:16.245      }
00:09:16.245    ]
00:09:16.245  }'
00:09:16.245   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:16.245   11:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.814   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:16.814   11:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:16.814   11:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.814  [2024-12-16 11:30:42.686462] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:16.814  [2024-12-16 11:30:42.686594] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:16.814  [2024-12-16 11:30:42.686704] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:16.814  [2024-12-16 11:30:42.686764] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:16.814  [2024-12-16 11:30:42.686775] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:09:16.814   11:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:16.814    11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:16.814    11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:09:16.814    11:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:16.814    11:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.814    11:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:16.814   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:09:16.814   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:09:16.814   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:09:16.814   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:09:16.814   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:09:16.814   11:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:16.814   11:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.814   11:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:16.814   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:09:16.814   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:09:16.814   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:09:16.814   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:09:16.814   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1
00:09:16.814   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:16.814   11:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:16.814   11:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.814  [2024-12-16 11:30:42.766321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:16.814  [2024-12-16 11:30:42.766432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:16.814  [2024-12-16 11:30:42.766483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:09:16.814  [2024-12-16 11:30:42.766517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:16.814  [2024-12-16 11:30:42.769056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:16.814  [2024-12-16 11:30:42.769141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:16.814  [2024-12-16 11:30:42.769263] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:09:16.814  [2024-12-16 11:30:42.769332] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:16.814  [2024-12-16 11:30:42.769451] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:09:16.814  [2024-12-16 11:30:42.769493] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:09:16.814  [2024-12-16 11:30:42.769781] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:09:16.814  [2024-12-16 11:30:42.769961] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:09:16.814  [2024-12-16 11:30:42.770013] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00
00:09:16.814  [2024-12-16 11:30:42.770174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:16.814  pt2
00:09:16.814   11:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:16.814   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:09:16.814   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:16.814   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:16.814   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:16.814   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:16.814   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:09:16.814   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:16.814   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:16.815   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:16.815   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:16.815    11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:16.815    11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:16.815    11:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:16.815    11:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.815    11:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:16.815   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:16.815    "name": "raid_bdev1",
00:09:16.815    "uuid": "cda2356e-e26a-47fe-aa83-f6250485f08e",
00:09:16.815    "strip_size_kb": 0,
00:09:16.815    "state": "online",
00:09:16.815    "raid_level": "raid1",
00:09:16.815    "superblock": true,
00:09:16.815    "num_base_bdevs": 2,
00:09:16.815    "num_base_bdevs_discovered": 1,
00:09:16.815    "num_base_bdevs_operational": 1,
00:09:16.815    "base_bdevs_list": [
00:09:16.815      {
00:09:16.815        "name": null,
00:09:16.815        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:16.815        "is_configured": false,
00:09:16.815        "data_offset": 2048,
00:09:16.815        "data_size": 63488
00:09:16.815      },
00:09:16.815      {
00:09:16.815        "name": "pt2",
00:09:16.815        "uuid": "00000000-0000-0000-0000-000000000002",
00:09:16.815        "is_configured": true,
00:09:16.815        "data_offset": 2048,
00:09:16.815        "data_size": 63488
00:09:16.815      }
00:09:16.815    ]
00:09:16.815  }'
00:09:16.815   11:30:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:16.815   11:30:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:17.385   11:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:17.385   11:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:17.385   11:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:17.385  [2024-12-16 11:30:43.253606] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:17.385  [2024-12-16 11:30:43.253639] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:17.385  [2024-12-16 11:30:43.253729] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:17.385  [2024-12-16 11:30:43.253781] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:17.385  [2024-12-16 11:30:43.253795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline
00:09:17.385   11:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:17.385    11:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:17.385    11:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:09:17.385    11:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:17.385    11:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:17.385    11:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:17.385   11:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:09:17.385   11:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:09:17.385   11:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']'
00:09:17.385   11:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:09:17.385   11:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:17.385   11:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:17.385  [2024-12-16 11:30:43.317441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:09:17.385  [2024-12-16 11:30:43.317512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:17.385  [2024-12-16 11:30:43.317548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:09:17.385  [2024-12-16 11:30:43.317569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:17.385  [2024-12-16 11:30:43.320077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:17.385  [2024-12-16 11:30:43.320176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:09:17.385  [2024-12-16 11:30:43.320265] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:09:17.385  [2024-12-16 11:30:43.320316] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:09:17.385  [2024-12-16 11:30:43.320436] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:09:17.385  [2024-12-16 11:30:43.320451] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:17.385  [2024-12-16 11:30:43.320470] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring
00:09:17.385  [2024-12-16 11:30:43.320518] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:17.385  [2024-12-16 11:30:43.320620] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400
00:09:17.385  [2024-12-16 11:30:43.320634] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:09:17.385  [2024-12-16 11:30:43.320897] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:09:17.385  [2024-12-16 11:30:43.321028] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400
00:09:17.385  [2024-12-16 11:30:43.321040] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400
00:09:17.385  [2024-12-16 11:30:43.321167] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:17.385  pt1
00:09:17.385   11:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:17.385   11:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']'
00:09:17.385   11:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:09:17.385   11:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:17.385   11:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:17.385   11:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:17.385   11:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:17.385   11:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:09:17.385   11:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:17.385   11:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:17.385   11:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:17.385   11:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:17.385    11:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:17.385    11:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:17.385    11:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:17.385    11:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:17.385    11:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:17.385   11:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:17.385    "name": "raid_bdev1",
00:09:17.385    "uuid": "cda2356e-e26a-47fe-aa83-f6250485f08e",
00:09:17.385    "strip_size_kb": 0,
00:09:17.385    "state": "online",
00:09:17.385    "raid_level": "raid1",
00:09:17.385    "superblock": true,
00:09:17.385    "num_base_bdevs": 2,
00:09:17.385    "num_base_bdevs_discovered": 1,
00:09:17.385    "num_base_bdevs_operational": 1,
00:09:17.385    "base_bdevs_list": [
00:09:17.385      {
00:09:17.385        "name": null,
00:09:17.385        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:17.385        "is_configured": false,
00:09:17.385        "data_offset": 2048,
00:09:17.385        "data_size": 63488
00:09:17.385      },
00:09:17.385      {
00:09:17.385        "name": "pt2",
00:09:17.385        "uuid": "00000000-0000-0000-0000-000000000002",
00:09:17.385        "is_configured": true,
00:09:17.385        "data_offset": 2048,
00:09:17.385        "data_size": 63488
00:09:17.385      }
00:09:17.385    ]
00:09:17.385  }'
00:09:17.385   11:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:17.385   11:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:17.953    11:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online
00:09:17.953    11:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:17.953    11:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:17.953    11:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:09:17.953    11:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:17.953   11:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:09:17.953    11:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:17.953    11:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
00:09:17.953    11:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:17.953    11:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:17.953  [2024-12-16 11:30:43.876844] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:17.953    11:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:17.953   11:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' cda2356e-e26a-47fe-aa83-f6250485f08e '!=' cda2356e-e26a-47fe-aa83-f6250485f08e ']'
00:09:17.953   11:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74727
00:09:17.953   11:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 74727 ']'
00:09:17.953   11:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 74727
00:09:17.953    11:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname
00:09:17.953   11:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:17.953    11:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74727
00:09:17.953  killing process with pid 74727
00:09:17.953   11:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:17.953   11:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:17.953   11:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74727'
00:09:17.953   11:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 74727
00:09:17.953  [2024-12-16 11:30:43.963985] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:17.953  [2024-12-16 11:30:43.964089] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:17.954   11:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 74727
00:09:17.954  [2024-12-16 11:30:43.964146] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:17.954  [2024-12-16 11:30:43.964157] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline
00:09:17.954  [2024-12-16 11:30:43.988727] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:18.213   11:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:09:18.213  
00:09:18.213  real	0m5.223s
00:09:18.213  user	0m8.559s
00:09:18.213  sys	0m1.083s
00:09:18.213   11:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:18.213  ************************************
00:09:18.213  END TEST raid_superblock_test
00:09:18.213  ************************************
00:09:18.213   11:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
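A condensed sketch of the superblock round-trip that raid_superblock_test exercised above (assumptions: the initial create step happened earlier in this log, before this excerpt, and is reconstructed here from the pt1/pt2 names, UUIDs, and the "-s" form of bdev_raid_create visible elsewhere in the transcript):

rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
rpc_cmd bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s   # -s writes a superblock

# removing one base bdev degrades the raid1 array but keeps it online
# (num_base_bdevs_discovered drops from 2 to 1 in the JSON above)
rpc_cmd bdev_passthru_delete pt1

# after deleting the raid and re-creating a passthru bdev with the same UUID,
# examine finds the on-disk superblock and re-assembles raid_bdev1 automatically,
# as logged above ("raid superblock found on bdev pt2")
rpc_cmd bdev_raid_delete raid_bdev1
rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002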
00:09:18.473   11:30:44 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read
00:09:18.473   11:30:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:09:18.473   11:30:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:18.473   11:30:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:18.473  ************************************
00:09:18.473  START TEST raid_read_error_test
00:09:18.473  ************************************
00:09:18.473   11:30:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read
00:09:18.473   11:30:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1
00:09:18.473   11:30:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2
00:09:18.473   11:30:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:09:18.473    11:30:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:09:18.473    11:30:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:18.473    11:30:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:09:18.473    11:30:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:18.473    11:30:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:18.473    11:30:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:09:18.473    11:30:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:18.473    11:30:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:18.473   11:30:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:09:18.473   11:30:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:09:18.473   11:30:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:09:18.473   11:30:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:09:18.473   11:30:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:09:18.473   11:30:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:09:18.473   11:30:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:09:18.473   11:30:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']'
00:09:18.473   11:30:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0
00:09:18.473    11:30:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:09:18.473   11:30:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nlH3paw4ra
00:09:18.473   11:30:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75046
00:09:18.473   11:30:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:09:18.473   11:30:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75046
00:09:18.473   11:30:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 75046 ']'
00:09:18.473   11:30:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:18.473   11:30:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:18.473   11:30:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:18.473  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:18.473   11:30:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:18.473   11:30:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:18.473  [2024-12-16 11:30:44.418577] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:09:18.473  [2024-12-16 11:30:44.418823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75046 ]
00:09:18.733  [2024-12-16 11:30:44.584481] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:18.733  [2024-12-16 11:30:44.636888] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:09:18.733  [2024-12-16 11:30:44.682475] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:18.733  [2024-12-16 11:30:44.682516] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:19.301   11:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:19.301   11:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0
00:09:19.301   11:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:19.301   11:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:09:19.301   11:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:19.301   11:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.301  BaseBdev1_malloc
00:09:19.301   11:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:19.301   11:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:09:19.301   11:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:19.301   11:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.301  true
00:09:19.301   11:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:19.301   11:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:09:19.301   11:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:19.301   11:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.301  [2024-12-16 11:30:45.363397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:09:19.301  [2024-12-16 11:30:45.363457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:19.301  [2024-12-16 11:30:45.363484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:09:19.301  [2024-12-16 11:30:45.363494] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:19.301  [2024-12-16 11:30:45.366055] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:19.301  [2024-12-16 11:30:45.366100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:09:19.619  BaseBdev1
00:09:19.619   11:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:19.619   11:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:19.619   11:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:09:19.619   11:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:19.619   11:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.619  BaseBdev2_malloc
00:09:19.619   11:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:19.619   11:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:09:19.619   11:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:19.619   11:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.619  true
00:09:19.619   11:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:19.619   11:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:09:19.619   11:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:19.619   11:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.619  [2024-12-16 11:30:45.416183] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:09:19.619  [2024-12-16 11:30:45.416245] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:19.619  [2024-12-16 11:30:45.416271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:09:19.619  [2024-12-16 11:30:45.416281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:19.619  [2024-12-16 11:30:45.418707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:19.619  [2024-12-16 11:30:45.418800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:09:19.619  BaseBdev2
00:09:19.619   11:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:19.619   11:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:09:19.619   11:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:19.619   11:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.619  [2024-12-16 11:30:45.428210] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:19.619  [2024-12-16 11:30:45.430398] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:19.619  [2024-12-16 11:30:45.430674] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:09:19.619  [2024-12-16 11:30:45.430697] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:09:19.619  [2024-12-16 11:30:45.430998] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:09:19.619  [2024-12-16 11:30:45.431156] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:09:19.619  [2024-12-16 11:30:45.431172] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:09:19.619  [2024-12-16 11:30:45.431336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:19.619   11:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:19.619   11:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:09:19.619   11:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:19.619   11:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:19.619   11:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:19.619   11:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:19.619   11:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:19.619   11:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:19.619   11:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:19.619   11:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:19.619   11:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:19.619    11:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:19.619    11:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:19.619    11:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:19.619    11:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.619    11:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:19.619   11:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:19.619    "name": "raid_bdev1",
00:09:19.619    "uuid": "d4dbe72c-b9fc-4e05-a89a-0104adc45bb3",
00:09:19.619    "strip_size_kb": 0,
00:09:19.619    "state": "online",
00:09:19.619    "raid_level": "raid1",
00:09:19.619    "superblock": true,
00:09:19.619    "num_base_bdevs": 2,
00:09:19.619    "num_base_bdevs_discovered": 2,
00:09:19.619    "num_base_bdevs_operational": 2,
00:09:19.619    "base_bdevs_list": [
00:09:19.619      {
00:09:19.619        "name": "BaseBdev1",
00:09:19.619        "uuid": "2bb286f0-7482-5fb9-96c0-96f5cdbc68fa",
00:09:19.619        "is_configured": true,
00:09:19.619        "data_offset": 2048,
00:09:19.619        "data_size": 63488
00:09:19.619      },
00:09:19.619      {
00:09:19.619        "name": "BaseBdev2",
00:09:19.619        "uuid": "69d90ec8-6444-5f84-9950-4990a13515e8",
00:09:19.619        "is_configured": true,
00:09:19.619        "data_offset": 2048,
00:09:19.619        "data_size": 63488
00:09:19.619      }
00:09:19.619    ]
00:09:19.619  }'
00:09:19.619   11:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:19.619   11:30:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.902   11:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:09:19.902   11:30:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:09:20.161  [2024-12-16 11:30:46.031731] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:09:21.100   11:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:09:21.100   11:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:21.100   11:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.100   11:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:21.100   11:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:09:21.100   11:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:09:21.100   11:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]]
00:09:21.100   11:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:09:21.100   11:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:09:21.100   11:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:21.100   11:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:21.100   11:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:21.100   11:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:21.100   11:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:21.100   11:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:21.100   11:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:21.100   11:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:21.100   11:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:21.100    11:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:21.100    11:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:21.100    11:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:21.100    11:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.100    11:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:21.100   11:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:21.100    "name": "raid_bdev1",
00:09:21.100    "uuid": "d4dbe72c-b9fc-4e05-a89a-0104adc45bb3",
00:09:21.100    "strip_size_kb": 0,
00:09:21.100    "state": "online",
00:09:21.100    "raid_level": "raid1",
00:09:21.100    "superblock": true,
00:09:21.100    "num_base_bdevs": 2,
00:09:21.100    "num_base_bdevs_discovered": 2,
00:09:21.100    "num_base_bdevs_operational": 2,
00:09:21.100    "base_bdevs_list": [
00:09:21.100      {
00:09:21.100        "name": "BaseBdev1",
00:09:21.100        "uuid": "2bb286f0-7482-5fb9-96c0-96f5cdbc68fa",
00:09:21.100        "is_configured": true,
00:09:21.100        "data_offset": 2048,
00:09:21.100        "data_size": 63488
00:09:21.100      },
00:09:21.100      {
00:09:21.100        "name": "BaseBdev2",
00:09:21.100        "uuid": "69d90ec8-6444-5f84-9950-4990a13515e8",
00:09:21.100        "is_configured": true,
00:09:21.100        "data_offset": 2048,
00:09:21.100        "data_size": 63488
00:09:21.100      }
00:09:21.100    ]
00:09:21.100  }'
00:09:21.100   11:30:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:21.100   11:30:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.360   11:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:21.360   11:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:21.360   11:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.360  [2024-12-16 11:30:47.319995] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:21.360  [2024-12-16 11:30:47.320032] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:21.360  [2024-12-16 11:30:47.323124] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:21.360  [2024-12-16 11:30:47.323171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:21.360  [2024-12-16 11:30:47.323282] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:21.360  [2024-12-16 11:30:47.323296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:09:21.360  {
00:09:21.360    "results": [
00:09:21.360      {
00:09:21.360        "job": "raid_bdev1",
00:09:21.360        "core_mask": "0x1",
00:09:21.360        "workload": "randrw",
00:09:21.360        "percentage": 50,
00:09:21.360        "status": "finished",
00:09:21.360        "queue_depth": 1,
00:09:21.360        "io_size": 131072,
00:09:21.360        "runtime": 1.288559,
00:09:21.360        "iops": 15957.360120879215,
00:09:21.360        "mibps": 1994.670015109902,
00:09:21.360        "io_failed": 0,
00:09:21.360        "io_timeout": 0,
00:09:21.360        "avg_latency_us": 59.39829379586459,
00:09:21.360        "min_latency_us": 29.512663755458515,
00:09:21.360        "max_latency_us": 1802.955458515284
00:09:21.360      }
00:09:21.360    ],
00:09:21.360    "core_count": 1
00:09:21.360  }
00:09:21.360   11:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:21.360   11:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75046
00:09:21.360   11:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 75046 ']'
00:09:21.360   11:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 75046
00:09:21.360    11:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname
00:09:21.360   11:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:21.360    11:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75046
00:09:21.360   11:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:21.360   11:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:21.360  killing process with pid 75046
00:09:21.360   11:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75046'
00:09:21.360   11:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 75046
00:09:21.360   11:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 75046
00:09:21.360  [2024-12-16 11:30:47.367371] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:21.360  [2024-12-16 11:30:47.384228] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:21.619    11:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nlH3paw4ra
00:09:21.619    11:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:09:21.619    11:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:09:21.619  ************************************
00:09:21.619  END TEST raid_read_error_test
00:09:21.619  ************************************
00:09:21.619   11:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00
00:09:21.619   11:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1
00:09:21.619   11:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:21.619   11:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0
00:09:21.619   11:30:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]]
00:09:21.619  
00:09:21.619  real	0m3.331s
00:09:21.619  user	0m4.284s
00:09:21.619  sys	0m0.536s
00:09:21.619   11:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:21.619   11:30:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.879   11:30:47 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write
00:09:21.879   11:30:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:09:21.879   11:30:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:21.879   11:30:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:21.879  ************************************
00:09:21.879  START TEST raid_write_error_test
00:09:21.879  ************************************
00:09:21.879   11:30:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write
00:09:21.879   11:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1
00:09:21.879   11:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2
00:09:21.879   11:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:09:21.879    11:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:09:21.879    11:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:21.879    11:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:09:21.879    11:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:21.879    11:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:21.879    11:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:09:21.879    11:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:21.879    11:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:21.879   11:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:09:21.879   11:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:09:21.879   11:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:09:21.879   11:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:09:21.879   11:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:09:21.879   11:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:09:21.879   11:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:09:21.879   11:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']'
00:09:21.879   11:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0
00:09:21.879    11:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:09:21.879   11:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8hENaPB2xu
00:09:21.879   11:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75175
00:09:21.879   11:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75175
00:09:21.879   11:30:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 75175 ']'
00:09:21.879   11:30:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:21.879   11:30:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:21.879  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:21.879   11:30:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:21.879   11:30:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:21.879   11:30:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.879   11:30:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:09:21.879  [2024-12-16 11:30:47.803738] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:09:21.879  [2024-12-16 11:30:47.803885] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75175 ]
00:09:22.139  [2024-12-16 11:30:47.959195] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:22.139  [2024-12-16 11:30:48.033824] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:09:22.139  [2024-12-16 11:30:48.094503] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:22.139  [2024-12-16 11:30:48.094569] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:22.708   11:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:22.708   11:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0
00:09:22.708   11:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:22.708   11:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:09:22.708   11:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:22.708   11:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:22.708  BaseBdev1_malloc
00:09:22.708   11:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:22.708   11:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:09:22.708   11:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:22.708   11:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:22.708  true
00:09:22.708   11:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:22.708   11:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:09:22.708   11:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:22.708   11:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:22.708  [2024-12-16 11:30:48.727086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:09:22.708  [2024-12-16 11:30:48.727153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:22.708  [2024-12-16 11:30:48.727180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:09:22.708  [2024-12-16 11:30:48.727191] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:22.708  [2024-12-16 11:30:48.729734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:22.708  [2024-12-16 11:30:48.729834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:09:22.708  BaseBdev1
00:09:22.708   11:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:22.708   11:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:22.708   11:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:09:22.708   11:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:22.708   11:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:22.708  BaseBdev2_malloc
00:09:22.708   11:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:22.708   11:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:09:22.709   11:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:22.709   11:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:22.709  true
00:09:22.709   11:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:22.709   11:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:09:22.709   11:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:22.709   11:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:22.709  [2024-12-16 11:30:48.764633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:09:22.709  [2024-12-16 11:30:48.764694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:22.709  [2024-12-16 11:30:48.764717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:09:22.709  [2024-12-16 11:30:48.764728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:22.709  [2024-12-16 11:30:48.767181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:22.709  [2024-12-16 11:30:48.767223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:09:22.709  BaseBdev2
00:09:22.709   11:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:22.709   11:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:09:22.709   11:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:22.709   11:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:22.709  [2024-12-16 11:30:48.772667] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:22.968  [2024-12-16 11:30:48.774894] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:22.968  [2024-12-16 11:30:48.775095] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:09:22.968  [2024-12-16 11:30:48.775111] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:09:22.968  [2024-12-16 11:30:48.775430] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:09:22.968  [2024-12-16 11:30:48.775620] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:09:22.968  [2024-12-16 11:30:48.775644] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:09:22.968  [2024-12-16 11:30:48.775802] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:22.968   11:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:22.968   11:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:09:22.968   11:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:22.968   11:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:22.968   11:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:22.968   11:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:22.968   11:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:22.968   11:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:22.968   11:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:22.968   11:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:22.968   11:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:22.968    11:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:22.968    11:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:22.968    11:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:22.968    11:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:22.968    11:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:22.968   11:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:22.968    "name": "raid_bdev1",
00:09:22.968    "uuid": "2dbe74bb-f4a3-49c5-858b-cf2951e05eb9",
00:09:22.968    "strip_size_kb": 0,
00:09:22.968    "state": "online",
00:09:22.968    "raid_level": "raid1",
00:09:22.968    "superblock": true,
00:09:22.968    "num_base_bdevs": 2,
00:09:22.968    "num_base_bdevs_discovered": 2,
00:09:22.968    "num_base_bdevs_operational": 2,
00:09:22.968    "base_bdevs_list": [
00:09:22.968      {
00:09:22.968        "name": "BaseBdev1",
00:09:22.968        "uuid": "cae8c5eb-79b1-5ea0-9419-7324a94c6fad",
00:09:22.968        "is_configured": true,
00:09:22.968        "data_offset": 2048,
00:09:22.968        "data_size": 63488
00:09:22.968      },
00:09:22.968      {
00:09:22.968        "name": "BaseBdev2",
00:09:22.968        "uuid": "ba7e6c44-7cd7-5cb2-8712-0485cc998e9d",
00:09:22.968        "is_configured": true,
00:09:22.968        "data_offset": 2048,
00:09:22.968        "data_size": 63488
00:09:22.968      }
00:09:22.968    ]
00:09:22.968  }'
00:09:22.968   11:30:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:22.968   11:30:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:23.228   11:30:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:09:23.228   11:30:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:09:23.488  [2024-12-16 11:30:49.332120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:09:24.426   11:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:09:24.426   11:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.426   11:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.426  [2024-12-16 11:30:50.249824] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1'
00:09:24.426  [2024-12-16 11:30:50.249893] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:24.426  [2024-12-16 11:30:50.250109] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40
00:09:24.426   11:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.426   11:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:09:24.426   11:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:09:24.426   11:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]]
00:09:24.426   11:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1
00:09:24.426   11:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:09:24.426   11:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:24.426   11:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:24.426   11:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:24.426   11:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:24.426   11:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:09:24.426   11:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:24.426   11:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:24.426   11:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:24.426   11:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:24.426    11:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:24.426    11:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:24.426    11:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.426    11:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.427    11:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.427   11:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:24.427    "name": "raid_bdev1",
00:09:24.427    "uuid": "2dbe74bb-f4a3-49c5-858b-cf2951e05eb9",
00:09:24.427    "strip_size_kb": 0,
00:09:24.427    "state": "online",
00:09:24.427    "raid_level": "raid1",
00:09:24.427    "superblock": true,
00:09:24.427    "num_base_bdevs": 2,
00:09:24.427    "num_base_bdevs_discovered": 1,
00:09:24.427    "num_base_bdevs_operational": 1,
00:09:24.427    "base_bdevs_list": [
00:09:24.427      {
00:09:24.427        "name": null,
00:09:24.427        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:24.427        "is_configured": false,
00:09:24.427        "data_offset": 0,
00:09:24.427        "data_size": 63488
00:09:24.427      },
00:09:24.427      {
00:09:24.427        "name": "BaseBdev2",
00:09:24.427        "uuid": "ba7e6c44-7cd7-5cb2-8712-0485cc998e9d",
00:09:24.427        "is_configured": true,
00:09:24.427        "data_offset": 2048,
00:09:24.427        "data_size": 63488
00:09:24.427      }
00:09:24.427    ]
00:09:24.427  }'
00:09:24.427   11:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:24.427   11:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.686   11:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:24.686   11:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.686   11:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.686  [2024-12-16 11:30:50.724009] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:24.686  [2024-12-16 11:30:50.724113] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:24.686  [2024-12-16 11:30:50.727227] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:24.686  [2024-12-16 11:30:50.727336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:24.686  [2024-12-16 11:30:50.727424] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:24.686  [2024-12-16 11:30:50.727496] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:09:24.686  {
00:09:24.686    "results": [
00:09:24.686      {
00:09:24.686        "job": "raid_bdev1",
00:09:24.686        "core_mask": "0x1",
00:09:24.686        "workload": "randrw",
00:09:24.686        "percentage": 50,
00:09:24.686        "status": "finished",
00:09:24.686        "queue_depth": 1,
00:09:24.686        "io_size": 131072,
00:09:24.686        "runtime": 1.392381,
00:09:24.686        "iops": 18760.669673027714,
00:09:24.686        "mibps": 2345.0837091284643,
00:09:24.686        "io_failed": 0,
00:09:24.686        "io_timeout": 0,
00:09:24.686        "avg_latency_us": 50.07877513942806,
00:09:24.686        "min_latency_us": 27.72401746724891,
00:09:24.686        "max_latency_us": 1774.3371179039302
00:09:24.686      }
00:09:24.686    ],
00:09:24.686    "core_count": 1
00:09:24.686  }
00:09:24.686   11:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.686   11:30:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75175
00:09:24.686   11:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 75175 ']'
00:09:24.686   11:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 75175
00:09:24.686    11:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname
00:09:24.686   11:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:24.686    11:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75175
00:09:24.945  killing process with pid 75175
00:09:24.945   11:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:24.945   11:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:24.945   11:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75175'
00:09:24.946   11:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 75175
00:09:24.946   11:30:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 75175
00:09:24.946  [2024-12-16 11:30:50.771046] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:24.946  [2024-12-16 11:30:50.787515] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:25.206    11:30:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8hENaPB2xu
00:09:25.206    11:30:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:09:25.206    11:30:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:09:25.206   11:30:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00
00:09:25.206   11:30:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1
00:09:25.206  ************************************
00:09:25.206  END TEST raid_write_error_test
00:09:25.206  ************************************
00:09:25.206   11:30:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:25.206   11:30:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0
00:09:25.206   11:30:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]]
00:09:25.206  
00:09:25.206  real	0m3.339s
00:09:25.206  user	0m4.295s
00:09:25.206  sys	0m0.528s
00:09:25.206   11:30:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:25.206   11:30:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.206   11:30:51 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4}
00:09:25.206   11:30:51 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:09:25.206   11:30:51 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false
00:09:25.206   11:30:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:09:25.206   11:30:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:25.206   11:30:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:25.206  ************************************
00:09:25.206  START TEST raid_state_function_test
00:09:25.206  ************************************
00:09:25.207   11:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false
00:09:25.207   11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:09:25.207   11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:09:25.207   11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:09:25.207   11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:09:25.207    11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:09:25.207    11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:25.207    11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:09:25.207    11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:25.207    11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:25.207    11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:09:25.207    11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:25.207    11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:25.207    11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:09:25.207    11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:25.207    11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:25.207   11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:09:25.207   11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:09:25.207   11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:09:25.207   11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:09:25.207   11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:09:25.207  Process raid pid: 75308
00:09:25.207  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:25.207   11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:09:25.207   11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:09:25.207   11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:09:25.207   11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:09:25.207   11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:09:25.207   11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:09:25.207   11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=75308
00:09:25.207   11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75308'
00:09:25.207   11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 75308
00:09:25.207   11:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 75308 ']'
00:09:25.207   11:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:25.207   11:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:25.207   11:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:25.207   11:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:25.207   11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:09:25.207   11:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.207  [2024-12-16 11:30:51.192723] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:09:25.207  [2024-12-16 11:30:51.192943] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:25.474  [2024-12-16 11:30:51.348306] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:25.474  [2024-12-16 11:30:51.399019] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:09:25.474  [2024-12-16 11:30:51.444487] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:25.474  [2024-12-16 11:30:51.444644] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:26.042   11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:26.042   11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:09:26.042   11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:26.042   11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:26.042   11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.301  [2024-12-16 11:30:52.111677] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:26.301  [2024-12-16 11:30:52.111796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:26.301  [2024-12-16 11:30:52.111855] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:26.301  [2024-12-16 11:30:52.111886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:26.301  [2024-12-16 11:30:52.111917] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:26.301  [2024-12-16 11:30:52.111947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:26.301   11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:26.301   11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:26.301   11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:26.301   11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:26.301   11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:26.301   11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:26.301   11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:26.301   11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:26.301   11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:26.301   11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:26.302   11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:26.302    11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:26.302    11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:26.302    11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:26.302    11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.302    11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:26.302   11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:26.302    "name": "Existed_Raid",
00:09:26.302    "uuid": "00000000-0000-0000-0000-000000000000",
00:09:26.302    "strip_size_kb": 64,
00:09:26.302    "state": "configuring",
00:09:26.302    "raid_level": "raid0",
00:09:26.302    "superblock": false,
00:09:26.302    "num_base_bdevs": 3,
00:09:26.302    "num_base_bdevs_discovered": 0,
00:09:26.302    "num_base_bdevs_operational": 3,
00:09:26.302    "base_bdevs_list": [
00:09:26.302      {
00:09:26.302        "name": "BaseBdev1",
00:09:26.302        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:26.302        "is_configured": false,
00:09:26.302        "data_offset": 0,
00:09:26.302        "data_size": 0
00:09:26.302      },
00:09:26.302      {
00:09:26.302        "name": "BaseBdev2",
00:09:26.302        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:26.302        "is_configured": false,
00:09:26.302        "data_offset": 0,
00:09:26.302        "data_size": 0
00:09:26.302      },
00:09:26.302      {
00:09:26.302        "name": "BaseBdev3",
00:09:26.302        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:26.302        "is_configured": false,
00:09:26.302        "data_offset": 0,
00:09:26.302        "data_size": 0
00:09:26.302      }
00:09:26.302    ]
00:09:26.302  }'
00:09:26.302   11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:26.302   11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.562  [2024-12-16 11:30:52.578902] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:26.562  [2024-12-16 11:30:52.579009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.562  [2024-12-16 11:30:52.586926] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:26.562  [2024-12-16 11:30:52.587014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:26.562  [2024-12-16 11:30:52.587047] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:26.562  [2024-12-16 11:30:52.587075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:26.562  [2024-12-16 11:30:52.587097] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:26.562  [2024-12-16 11:30:52.587122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.562  [2024-12-16 11:30:52.604499] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:26.562  BaseBdev1
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.562  [
00:09:26.562  {
00:09:26.562  "name": "BaseBdev1",
00:09:26.562  "aliases": [
00:09:26.562  "4cf9f072-081e-4f7a-a202-28ca417e437b"
00:09:26.562  ],
00:09:26.562  "product_name": "Malloc disk",
00:09:26.562  "block_size": 512,
00:09:26.562  "num_blocks": 65536,
00:09:26.562  "uuid": "4cf9f072-081e-4f7a-a202-28ca417e437b",
00:09:26.562  "assigned_rate_limits": {
00:09:26.562  "rw_ios_per_sec": 0,
00:09:26.562  "rw_mbytes_per_sec": 0,
00:09:26.562  "r_mbytes_per_sec": 0,
00:09:26.562  "w_mbytes_per_sec": 0
00:09:26.562  },
00:09:26.562  "claimed": true,
00:09:26.562  "claim_type": "exclusive_write",
00:09:26.562  "zoned": false,
00:09:26.562  "supported_io_types": {
00:09:26.562  "read": true,
00:09:26.562  "write": true,
00:09:26.562  "unmap": true,
00:09:26.562  "flush": true,
00:09:26.562  "reset": true,
00:09:26.562  "nvme_admin": false,
00:09:26.562  "nvme_io": false,
00:09:26.562  "nvme_io_md": false,
00:09:26.562  "write_zeroes": true,
00:09:26.562  "zcopy": true,
00:09:26.562  "get_zone_info": false,
00:09:26.562  "zone_management": false,
00:09:26.562  "zone_append": false,
00:09:26.562  "compare": false,
00:09:26.562  "compare_and_write": false,
00:09:26.562  "abort": true,
00:09:26.562  "seek_hole": false,
00:09:26.562  "seek_data": false,
00:09:26.562  "copy": true,
00:09:26.562  "nvme_iov_md": false
00:09:26.562  },
00:09:26.562  "memory_domains": [
00:09:26.562  {
00:09:26.562  "dma_device_id": "system",
00:09:26.562  "dma_device_type": 1
00:09:26.562  },
00:09:26.562  {
00:09:26.562  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:26.562  "dma_device_type": 2
00:09:26.562  }
00:09:26.562  ],
00:09:26.562  "driver_specific": {}
00:09:26.562  }
00:09:26.562  ]
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:26.562   11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:26.822    11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:26.822    11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:26.822    11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:26.822    11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.822    11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:26.822   11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:26.822    "name": "Existed_Raid",
00:09:26.822    "uuid": "00000000-0000-0000-0000-000000000000",
00:09:26.822    "strip_size_kb": 64,
00:09:26.822    "state": "configuring",
00:09:26.822    "raid_level": "raid0",
00:09:26.822    "superblock": false,
00:09:26.822    "num_base_bdevs": 3,
00:09:26.822    "num_base_bdevs_discovered": 1,
00:09:26.822    "num_base_bdevs_operational": 3,
00:09:26.822    "base_bdevs_list": [
00:09:26.822      {
00:09:26.822        "name": "BaseBdev1",
00:09:26.822        "uuid": "4cf9f072-081e-4f7a-a202-28ca417e437b",
00:09:26.822        "is_configured": true,
00:09:26.822        "data_offset": 0,
00:09:26.822        "data_size": 65536
00:09:26.822      },
00:09:26.822      {
00:09:26.822        "name": "BaseBdev2",
00:09:26.822        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:26.822        "is_configured": false,
00:09:26.822        "data_offset": 0,
00:09:26.822        "data_size": 0
00:09:26.822      },
00:09:26.822      {
00:09:26.822        "name": "BaseBdev3",
00:09:26.822        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:26.822        "is_configured": false,
00:09:26.822        "data_offset": 0,
00:09:26.822        "data_size": 0
00:09:26.822      }
00:09:26.822    ]
00:09:26.822  }'
00:09:26.822   11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:26.822   11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.081   11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:27.081   11:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:27.081   11:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.081  [2024-12-16 11:30:53.067779] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:27.081  [2024-12-16 11:30:53.067902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:09:27.081   11:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:27.081   11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:27.081   11:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:27.081   11:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.081  [2024-12-16 11:30:53.075803] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:27.081  [2024-12-16 11:30:53.078070] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:27.081  [2024-12-16 11:30:53.078156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:27.081  [2024-12-16 11:30:53.078191] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:27.081  [2024-12-16 11:30:53.078221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:27.081   11:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:27.081   11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:09:27.081   11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:27.081   11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:27.081   11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:27.081   11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:27.081   11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:27.081   11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:27.081   11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:27.081   11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:27.081   11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:27.081   11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:27.081   11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:27.081    11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:27.081    11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:27.081    11:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:27.081    11:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.081    11:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:27.081   11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:27.081    "name": "Existed_Raid",
00:09:27.081    "uuid": "00000000-0000-0000-0000-000000000000",
00:09:27.081    "strip_size_kb": 64,
00:09:27.081    "state": "configuring",
00:09:27.081    "raid_level": "raid0",
00:09:27.081    "superblock": false,
00:09:27.081    "num_base_bdevs": 3,
00:09:27.081    "num_base_bdevs_discovered": 1,
00:09:27.081    "num_base_bdevs_operational": 3,
00:09:27.081    "base_bdevs_list": [
00:09:27.081      {
00:09:27.081        "name": "BaseBdev1",
00:09:27.081        "uuid": "4cf9f072-081e-4f7a-a202-28ca417e437b",
00:09:27.081        "is_configured": true,
00:09:27.081        "data_offset": 0,
00:09:27.081        "data_size": 65536
00:09:27.081      },
00:09:27.081      {
00:09:27.081        "name": "BaseBdev2",
00:09:27.081        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:27.081        "is_configured": false,
00:09:27.081        "data_offset": 0,
00:09:27.081        "data_size": 0
00:09:27.081      },
00:09:27.081      {
00:09:27.081        "name": "BaseBdev3",
00:09:27.081        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:27.081        "is_configured": false,
00:09:27.081        "data_offset": 0,
00:09:27.081        "data_size": 0
00:09:27.081      }
00:09:27.081    ]
00:09:27.081  }'
00:09:27.081   11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:27.081   11:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.648   11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:27.648   11:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:27.648   11:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.648  [2024-12-16 11:30:53.559451] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:27.648  BaseBdev2
00:09:27.648   11:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:27.648   11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:09:27.648   11:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:09:27.648   11:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:27.648   11:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:27.648   11:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:27.648   11:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:27.648   11:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:27.648   11:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:27.648   11:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.648   11:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:27.648   11:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:27.648   11:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:27.648   11:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.648  [
00:09:27.648  {
00:09:27.648  "name": "BaseBdev2",
00:09:27.648  "aliases": [
00:09:27.648  "17ac3fd0-16af-4e3a-9573-b00957d8dfb9"
00:09:27.648  ],
00:09:27.648  "product_name": "Malloc disk",
00:09:27.648  "block_size": 512,
00:09:27.648  "num_blocks": 65536,
00:09:27.648  "uuid": "17ac3fd0-16af-4e3a-9573-b00957d8dfb9",
00:09:27.648  "assigned_rate_limits": {
00:09:27.648  "rw_ios_per_sec": 0,
00:09:27.648  "rw_mbytes_per_sec": 0,
00:09:27.648  "r_mbytes_per_sec": 0,
00:09:27.648  "w_mbytes_per_sec": 0
00:09:27.648  },
00:09:27.648  "claimed": true,
00:09:27.648  "claim_type": "exclusive_write",
00:09:27.648  "zoned": false,
00:09:27.648  "supported_io_types": {
00:09:27.648  "read": true,
00:09:27.648  "write": true,
00:09:27.648  "unmap": true,
00:09:27.648  "flush": true,
00:09:27.648  "reset": true,
00:09:27.648  "nvme_admin": false,
00:09:27.648  "nvme_io": false,
00:09:27.648  "nvme_io_md": false,
00:09:27.648  "write_zeroes": true,
00:09:27.648  "zcopy": true,
00:09:27.648  "get_zone_info": false,
00:09:27.648  "zone_management": false,
00:09:27.648  "zone_append": false,
00:09:27.648  "compare": false,
00:09:27.648  "compare_and_write": false,
00:09:27.648  "abort": true,
00:09:27.648  "seek_hole": false,
00:09:27.649  "seek_data": false,
00:09:27.649  "copy": true,
00:09:27.649  "nvme_iov_md": false
00:09:27.649  },
00:09:27.649  "memory_domains": [
00:09:27.649  {
00:09:27.649  "dma_device_id": "system",
00:09:27.649  "dma_device_type": 1
00:09:27.649  },
00:09:27.649  {
00:09:27.649  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:27.649  "dma_device_type": 2
00:09:27.649  }
00:09:27.649  ],
00:09:27.649  "driver_specific": {}
00:09:27.649  }
00:09:27.649  ]
00:09:27.649   11:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:27.649   11:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:27.649   11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:27.649   11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:27.649   11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:27.649   11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:27.649   11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:27.649   11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:27.649   11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:27.649   11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:27.649   11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:27.649   11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:27.649   11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:27.649   11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:27.649    11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:27.649    11:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:27.649    11:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.649    11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:27.649    11:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:27.649   11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:27.649    "name": "Existed_Raid",
00:09:27.649    "uuid": "00000000-0000-0000-0000-000000000000",
00:09:27.649    "strip_size_kb": 64,
00:09:27.649    "state": "configuring",
00:09:27.649    "raid_level": "raid0",
00:09:27.649    "superblock": false,
00:09:27.649    "num_base_bdevs": 3,
00:09:27.649    "num_base_bdevs_discovered": 2,
00:09:27.649    "num_base_bdevs_operational": 3,
00:09:27.649    "base_bdevs_list": [
00:09:27.649      {
00:09:27.649        "name": "BaseBdev1",
00:09:27.649        "uuid": "4cf9f072-081e-4f7a-a202-28ca417e437b",
00:09:27.649        "is_configured": true,
00:09:27.649        "data_offset": 0,
00:09:27.649        "data_size": 65536
00:09:27.649      },
00:09:27.649      {
00:09:27.649        "name": "BaseBdev2",
00:09:27.649        "uuid": "17ac3fd0-16af-4e3a-9573-b00957d8dfb9",
00:09:27.649        "is_configured": true,
00:09:27.649        "data_offset": 0,
00:09:27.649        "data_size": 65536
00:09:27.649      },
00:09:27.649      {
00:09:27.649        "name": "BaseBdev3",
00:09:27.649        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:27.649        "is_configured": false,
00:09:27.649        "data_offset": 0,
00:09:27.649        "data_size": 0
00:09:27.649      }
00:09:27.649    ]
00:09:27.649  }'
00:09:27.649   11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:27.649   11:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.217   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:28.217   11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:28.217   11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.217  [2024-12-16 11:30:54.050055] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:28.217  [2024-12-16 11:30:54.050182] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:09:28.217  [2024-12-16 11:30:54.050216] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:09:28.217  [2024-12-16 11:30:54.050626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:09:28.217  [2024-12-16 11:30:54.050823] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:09:28.217  [2024-12-16 11:30:54.050873] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:09:28.217  [2024-12-16 11:30:54.051126] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:28.217  BaseBdev3
00:09:28.217   11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:28.217   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:09:28.217   11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:09:28.217   11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:28.217   11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:28.217   11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:28.217   11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:28.217   11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:28.217   11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:28.217   11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.217   11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:28.217   11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:28.217   11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:28.217   11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.217  [
00:09:28.217  {
00:09:28.217  "name": "BaseBdev3",
00:09:28.217  "aliases": [
00:09:28.217  "117e92ad-5f5f-4802-8684-425e59500aaa"
00:09:28.217  ],
00:09:28.217  "product_name": "Malloc disk",
00:09:28.217  "block_size": 512,
00:09:28.217  "num_blocks": 65536,
00:09:28.217  "uuid": "117e92ad-5f5f-4802-8684-425e59500aaa",
00:09:28.217  "assigned_rate_limits": {
00:09:28.217  "rw_ios_per_sec": 0,
00:09:28.217  "rw_mbytes_per_sec": 0,
00:09:28.217  "r_mbytes_per_sec": 0,
00:09:28.217  "w_mbytes_per_sec": 0
00:09:28.217  },
00:09:28.217  "claimed": true,
00:09:28.217  "claim_type": "exclusive_write",
00:09:28.217  "zoned": false,
00:09:28.217  "supported_io_types": {
00:09:28.217  "read": true,
00:09:28.217  "write": true,
00:09:28.217  "unmap": true,
00:09:28.217  "flush": true,
00:09:28.217  "reset": true,
00:09:28.217  "nvme_admin": false,
00:09:28.217  "nvme_io": false,
00:09:28.217  "nvme_io_md": false,
00:09:28.217  "write_zeroes": true,
00:09:28.217  "zcopy": true,
00:09:28.217  "get_zone_info": false,
00:09:28.217  "zone_management": false,
00:09:28.217  "zone_append": false,
00:09:28.217  "compare": false,
00:09:28.217  "compare_and_write": false,
00:09:28.217  "abort": true,
00:09:28.217  "seek_hole": false,
00:09:28.217  "seek_data": false,
00:09:28.217  "copy": true,
00:09:28.217  "nvme_iov_md": false
00:09:28.217  },
00:09:28.217  "memory_domains": [
00:09:28.217  {
00:09:28.217  "dma_device_id": "system",
00:09:28.217  "dma_device_type": 1
00:09:28.217  },
00:09:28.217  {
00:09:28.217  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:28.217  "dma_device_type": 2
00:09:28.217  }
00:09:28.217  ],
00:09:28.217  "driver_specific": {}
00:09:28.217  }
00:09:28.217  ]
00:09:28.217   11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:28.217   11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:28.217   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:28.217   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:28.217   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:09:28.217   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:28.217   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:28.217   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:28.217   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:28.217   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:28.217   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:28.217   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:28.217   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:28.217   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:28.217    11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:28.218    11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:28.218    11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.218    11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:28.218    11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:28.218   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:28.218    "name": "Existed_Raid",
00:09:28.218    "uuid": "128f784c-5041-4e63-a060-ff4a5c398c63",
00:09:28.218    "strip_size_kb": 64,
00:09:28.218    "state": "online",
00:09:28.218    "raid_level": "raid0",
00:09:28.218    "superblock": false,
00:09:28.218    "num_base_bdevs": 3,
00:09:28.218    "num_base_bdevs_discovered": 3,
00:09:28.218    "num_base_bdevs_operational": 3,
00:09:28.218    "base_bdevs_list": [
00:09:28.218      {
00:09:28.218        "name": "BaseBdev1",
00:09:28.218        "uuid": "4cf9f072-081e-4f7a-a202-28ca417e437b",
00:09:28.218        "is_configured": true,
00:09:28.218        "data_offset": 0,
00:09:28.218        "data_size": 65536
00:09:28.218      },
00:09:28.218      {
00:09:28.218        "name": "BaseBdev2",
00:09:28.218        "uuid": "17ac3fd0-16af-4e3a-9573-b00957d8dfb9",
00:09:28.218        "is_configured": true,
00:09:28.218        "data_offset": 0,
00:09:28.218        "data_size": 65536
00:09:28.218      },
00:09:28.218      {
00:09:28.218        "name": "BaseBdev3",
00:09:28.218        "uuid": "117e92ad-5f5f-4802-8684-425e59500aaa",
00:09:28.218        "is_configured": true,
00:09:28.218        "data_offset": 0,
00:09:28.218        "data_size": 65536
00:09:28.218      }
00:09:28.218    ]
00:09:28.218  }'
00:09:28.218   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:28.218   11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.476   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:09:28.476   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:09:28.477   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:28.477   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:28.477   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:28.477   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:28.477    11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:28.477    11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:09:28.477    11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:28.477    11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.477  [2024-12-16 11:30:54.533690] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:28.736    11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:28.736   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:28.736    "name": "Existed_Raid",
00:09:28.736    "aliases": [
00:09:28.736      "128f784c-5041-4e63-a060-ff4a5c398c63"
00:09:28.736    ],
00:09:28.736    "product_name": "Raid Volume",
00:09:28.736    "block_size": 512,
00:09:28.736    "num_blocks": 196608,
00:09:28.736    "uuid": "128f784c-5041-4e63-a060-ff4a5c398c63",
00:09:28.736    "assigned_rate_limits": {
00:09:28.736      "rw_ios_per_sec": 0,
00:09:28.736      "rw_mbytes_per_sec": 0,
00:09:28.736      "r_mbytes_per_sec": 0,
00:09:28.736      "w_mbytes_per_sec": 0
00:09:28.736    },
00:09:28.736    "claimed": false,
00:09:28.736    "zoned": false,
00:09:28.736    "supported_io_types": {
00:09:28.736      "read": true,
00:09:28.736      "write": true,
00:09:28.736      "unmap": true,
00:09:28.736      "flush": true,
00:09:28.736      "reset": true,
00:09:28.736      "nvme_admin": false,
00:09:28.736      "nvme_io": false,
00:09:28.736      "nvme_io_md": false,
00:09:28.736      "write_zeroes": true,
00:09:28.736      "zcopy": false,
00:09:28.736      "get_zone_info": false,
00:09:28.736      "zone_management": false,
00:09:28.736      "zone_append": false,
00:09:28.736      "compare": false,
00:09:28.736      "compare_and_write": false,
00:09:28.736      "abort": false,
00:09:28.736      "seek_hole": false,
00:09:28.736      "seek_data": false,
00:09:28.736      "copy": false,
00:09:28.736      "nvme_iov_md": false
00:09:28.736    },
00:09:28.736    "memory_domains": [
00:09:28.736      {
00:09:28.736        "dma_device_id": "system",
00:09:28.736        "dma_device_type": 1
00:09:28.736      },
00:09:28.736      {
00:09:28.736        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:28.736        "dma_device_type": 2
00:09:28.736      },
00:09:28.736      {
00:09:28.736        "dma_device_id": "system",
00:09:28.736        "dma_device_type": 1
00:09:28.736      },
00:09:28.736      {
00:09:28.736        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:28.736        "dma_device_type": 2
00:09:28.736      },
00:09:28.736      {
00:09:28.736        "dma_device_id": "system",
00:09:28.736        "dma_device_type": 1
00:09:28.736      },
00:09:28.736      {
00:09:28.736        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:28.736        "dma_device_type": 2
00:09:28.736      }
00:09:28.736    ],
00:09:28.736    "driver_specific": {
00:09:28.736      "raid": {
00:09:28.736        "uuid": "128f784c-5041-4e63-a060-ff4a5c398c63",
00:09:28.736        "strip_size_kb": 64,
00:09:28.736        "state": "online",
00:09:28.736        "raid_level": "raid0",
00:09:28.736        "superblock": false,
00:09:28.736        "num_base_bdevs": 3,
00:09:28.736        "num_base_bdevs_discovered": 3,
00:09:28.736        "num_base_bdevs_operational": 3,
00:09:28.736        "base_bdevs_list": [
00:09:28.736          {
00:09:28.736            "name": "BaseBdev1",
00:09:28.736            "uuid": "4cf9f072-081e-4f7a-a202-28ca417e437b",
00:09:28.736            "is_configured": true,
00:09:28.736            "data_offset": 0,
00:09:28.736            "data_size": 65536
00:09:28.736          },
00:09:28.736          {
00:09:28.736            "name": "BaseBdev2",
00:09:28.736            "uuid": "17ac3fd0-16af-4e3a-9573-b00957d8dfb9",
00:09:28.736            "is_configured": true,
00:09:28.736            "data_offset": 0,
00:09:28.736            "data_size": 65536
00:09:28.736          },
00:09:28.736          {
00:09:28.736            "name": "BaseBdev3",
00:09:28.736            "uuid": "117e92ad-5f5f-4802-8684-425e59500aaa",
00:09:28.736            "is_configured": true,
00:09:28.736            "data_offset": 0,
00:09:28.736            "data_size": 65536
00:09:28.736          }
00:09:28.736        ]
00:09:28.736      }
00:09:28.736    }
00:09:28.736  }'
00:09:28.736    11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:28.736   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:09:28.736  BaseBdev2
00:09:28.736  BaseBdev3'
00:09:28.736    11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:28.736   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:09:28.736   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:28.736    11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:09:28.736    11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:28.736    11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.736    11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:28.736    11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:28.736   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:09:28.736   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:09:28.736   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:28.736    11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:09:28.736    11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:28.736    11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.736    11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:28.736    11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:28.736   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:09:28.736   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:09:28.736   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:28.736    11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:09:28.736    11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:28.736    11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:28.736    11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.736    11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:28.736   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:09:28.736   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:09:28.736   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:28.737   11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:28.737   11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.737  [2024-12-16 11:30:54.789006] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:28.737  [2024-12-16 11:30:54.789101] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:28.737  [2024-12-16 11:30:54.789184] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:28.996   11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:28.996   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:09:28.996   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0
00:09:28.996   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:28.996   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:09:28.996   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:09:28.996   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2
00:09:28.996   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:28.996   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:09:28.996   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:28.996   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:28.996   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:28.996   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:28.996   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:28.996   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:28.996   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:28.996    11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:28.996    11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:28.996    11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:28.996    11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.996    11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:28.996   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:28.996    "name": "Existed_Raid",
00:09:28.996    "uuid": "128f784c-5041-4e63-a060-ff4a5c398c63",
00:09:28.996    "strip_size_kb": 64,
00:09:28.996    "state": "offline",
00:09:28.996    "raid_level": "raid0",
00:09:28.996    "superblock": false,
00:09:28.996    "num_base_bdevs": 3,
00:09:28.996    "num_base_bdevs_discovered": 2,
00:09:28.996    "num_base_bdevs_operational": 2,
00:09:28.996    "base_bdevs_list": [
00:09:28.996      {
00:09:28.996        "name": null,
00:09:28.996        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:28.996        "is_configured": false,
00:09:28.996        "data_offset": 0,
00:09:28.996        "data_size": 65536
00:09:28.996      },
00:09:28.996      {
00:09:28.996        "name": "BaseBdev2",
00:09:28.996        "uuid": "17ac3fd0-16af-4e3a-9573-b00957d8dfb9",
00:09:28.996        "is_configured": true,
00:09:28.996        "data_offset": 0,
00:09:28.996        "data_size": 65536
00:09:28.996      },
00:09:28.996      {
00:09:28.996        "name": "BaseBdev3",
00:09:28.996        "uuid": "117e92ad-5f5f-4802-8684-425e59500aaa",
00:09:28.996        "is_configured": true,
00:09:28.996        "data_offset": 0,
00:09:28.996        "data_size": 65536
00:09:28.996      }
00:09:28.996    ]
00:09:28.996  }'
00:09:28.996   11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:28.996   11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.255   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:09:29.255   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:29.255    11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:29.255    11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:29.255    11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:29.255    11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.255    11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:29.255   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:29.255   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:29.255   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:09:29.255   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:29.255   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.255  [2024-12-16 11:30:55.280439] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:29.255   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:29.255   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:29.255   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:29.255    11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:29.255    11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:29.255    11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.255    11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:29.255    11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:29.514   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:29.514   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:29.514   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:09:29.514   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:29.514   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.514  [2024-12-16 11:30:55.344261] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:09:29.514  [2024-12-16 11:30:55.344408] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline
00:09:29.514   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:29.514   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:29.514   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:29.514    11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:29.514    11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:09:29.514    11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:29.514    11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.514    11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:29.514   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:09:29.514   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:09:29.514   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:09:29.514   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:09:29.514   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:29.514   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:29.514   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:29.514   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.514  BaseBdev2
00:09:29.514   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:29.514   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:09:29.514   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:09:29.514   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:29.514   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:29.514   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:29.514   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:29.514   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:29.514   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:29.514   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.514   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:29.514   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:29.514   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:29.514   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.514  [
00:09:29.514  {
00:09:29.514  "name": "BaseBdev2",
00:09:29.514  "aliases": [
00:09:29.514  "323cc2cf-da0d-4b81-9b57-7c52a7091c94"
00:09:29.514  ],
00:09:29.514  "product_name": "Malloc disk",
00:09:29.514  "block_size": 512,
00:09:29.514  "num_blocks": 65536,
00:09:29.514  "uuid": "323cc2cf-da0d-4b81-9b57-7c52a7091c94",
00:09:29.514  "assigned_rate_limits": {
00:09:29.514  "rw_ios_per_sec": 0,
00:09:29.514  "rw_mbytes_per_sec": 0,
00:09:29.514  "r_mbytes_per_sec": 0,
00:09:29.514  "w_mbytes_per_sec": 0
00:09:29.514  },
00:09:29.514  "claimed": false,
00:09:29.514  "zoned": false,
00:09:29.514  "supported_io_types": {
00:09:29.514  "read": true,
00:09:29.514  "write": true,
00:09:29.514  "unmap": true,
00:09:29.514  "flush": true,
00:09:29.514  "reset": true,
00:09:29.514  "nvme_admin": false,
00:09:29.514  "nvme_io": false,
00:09:29.514  "nvme_io_md": false,
00:09:29.514  "write_zeroes": true,
00:09:29.514  "zcopy": true,
00:09:29.514  "get_zone_info": false,
00:09:29.514  "zone_management": false,
00:09:29.514  "zone_append": false,
00:09:29.514  "compare": false,
00:09:29.514  "compare_and_write": false,
00:09:29.514  "abort": true,
00:09:29.514  "seek_hole": false,
00:09:29.514  "seek_data": false,
00:09:29.514  "copy": true,
00:09:29.514  "nvme_iov_md": false
00:09:29.514  },
00:09:29.514  "memory_domains": [
00:09:29.514  {
00:09:29.515  "dma_device_id": "system",
00:09:29.515  "dma_device_type": 1
00:09:29.515  },
00:09:29.515  {
00:09:29.515  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:29.515  "dma_device_type": 2
00:09:29.515  }
00:09:29.515  ],
00:09:29.515  "driver_specific": {}
00:09:29.515  }
00:09:29.515  ]
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.515  BaseBdev3
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.515  [
00:09:29.515  {
00:09:29.515  "name": "BaseBdev3",
00:09:29.515  "aliases": [
00:09:29.515  "048a9e2c-c924-41ab-9061-6ecfe46a75a7"
00:09:29.515  ],
00:09:29.515  "product_name": "Malloc disk",
00:09:29.515  "block_size": 512,
00:09:29.515  "num_blocks": 65536,
00:09:29.515  "uuid": "048a9e2c-c924-41ab-9061-6ecfe46a75a7",
00:09:29.515  "assigned_rate_limits": {
00:09:29.515  "rw_ios_per_sec": 0,
00:09:29.515  "rw_mbytes_per_sec": 0,
00:09:29.515  "r_mbytes_per_sec": 0,
00:09:29.515  "w_mbytes_per_sec": 0
00:09:29.515  },
00:09:29.515  "claimed": false,
00:09:29.515  "zoned": false,
00:09:29.515  "supported_io_types": {
00:09:29.515  "read": true,
00:09:29.515  "write": true,
00:09:29.515  "unmap": true,
00:09:29.515  "flush": true,
00:09:29.515  "reset": true,
00:09:29.515  "nvme_admin": false,
00:09:29.515  "nvme_io": false,
00:09:29.515  "nvme_io_md": false,
00:09:29.515  "write_zeroes": true,
00:09:29.515  "zcopy": true,
00:09:29.515  "get_zone_info": false,
00:09:29.515  "zone_management": false,
00:09:29.515  "zone_append": false,
00:09:29.515  "compare": false,
00:09:29.515  "compare_and_write": false,
00:09:29.515  "abort": true,
00:09:29.515  "seek_hole": false,
00:09:29.515  "seek_data": false,
00:09:29.515  "copy": true,
00:09:29.515  "nvme_iov_md": false
00:09:29.515  },
00:09:29.515  "memory_domains": [
00:09:29.515  {
00:09:29.515  "dma_device_id": "system",
00:09:29.515  "dma_device_type": 1
00:09:29.515  },
00:09:29.515  {
00:09:29.515  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:29.515  "dma_device_type": 2
00:09:29.515  }
00:09:29.515  ],
00:09:29.515  "driver_specific": {}
00:09:29.515  }
00:09:29.515  ]
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.515  [2024-12-16 11:30:55.518772] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:29.515  [2024-12-16 11:30:55.518885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:29.515  [2024-12-16 11:30:55.518941] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:29.515  [2024-12-16 11:30:55.521153] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:29.515    11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:29.515    11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:29.515    11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:29.515    11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:29.515    11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:29.515    "name": "Existed_Raid",
00:09:29.515    "uuid": "00000000-0000-0000-0000-000000000000",
00:09:29.515    "strip_size_kb": 64,
00:09:29.515    "state": "configuring",
00:09:29.515    "raid_level": "raid0",
00:09:29.515    "superblock": false,
00:09:29.515    "num_base_bdevs": 3,
00:09:29.515    "num_base_bdevs_discovered": 2,
00:09:29.515    "num_base_bdevs_operational": 3,
00:09:29.515    "base_bdevs_list": [
00:09:29.515      {
00:09:29.515        "name": "BaseBdev1",
00:09:29.515        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:29.515        "is_configured": false,
00:09:29.515        "data_offset": 0,
00:09:29.515        "data_size": 0
00:09:29.515      },
00:09:29.515      {
00:09:29.515        "name": "BaseBdev2",
00:09:29.515        "uuid": "323cc2cf-da0d-4b81-9b57-7c52a7091c94",
00:09:29.515        "is_configured": true,
00:09:29.515        "data_offset": 0,
00:09:29.515        "data_size": 65536
00:09:29.515      },
00:09:29.515      {
00:09:29.515        "name": "BaseBdev3",
00:09:29.515        "uuid": "048a9e2c-c924-41ab-9061-6ecfe46a75a7",
00:09:29.515        "is_configured": true,
00:09:29.515        "data_offset": 0,
00:09:29.515        "data_size": 65536
00:09:29.515      }
00:09:29.515    ]
00:09:29.515  }'
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:29.515   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.082   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:09:30.082   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:30.082   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.082  [2024-12-16 11:30:55.973993] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:30.082   11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:30.082   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:30.082   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:30.082   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:30.082   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:30.082   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:30.082   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:30.082   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:30.082   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:30.082   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:30.082   11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:30.082    11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:30.082    11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:30.082    11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.082    11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:30.082    11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:30.082   11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:30.082    "name": "Existed_Raid",
00:09:30.082    "uuid": "00000000-0000-0000-0000-000000000000",
00:09:30.082    "strip_size_kb": 64,
00:09:30.082    "state": "configuring",
00:09:30.082    "raid_level": "raid0",
00:09:30.082    "superblock": false,
00:09:30.082    "num_base_bdevs": 3,
00:09:30.082    "num_base_bdevs_discovered": 1,
00:09:30.082    "num_base_bdevs_operational": 3,
00:09:30.082    "base_bdevs_list": [
00:09:30.082      {
00:09:30.082        "name": "BaseBdev1",
00:09:30.082        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:30.082        "is_configured": false,
00:09:30.082        "data_offset": 0,
00:09:30.082        "data_size": 0
00:09:30.082      },
00:09:30.082      {
00:09:30.082        "name": null,
00:09:30.082        "uuid": "323cc2cf-da0d-4b81-9b57-7c52a7091c94",
00:09:30.082        "is_configured": false,
00:09:30.082        "data_offset": 0,
00:09:30.082        "data_size": 65536
00:09:30.082      },
00:09:30.082      {
00:09:30.082        "name": "BaseBdev3",
00:09:30.082        "uuid": "048a9e2c-c924-41ab-9061-6ecfe46a75a7",
00:09:30.082        "is_configured": true,
00:09:30.082        "data_offset": 0,
00:09:30.082        "data_size": 65536
00:09:30.082      }
00:09:30.082    ]
00:09:30.082  }'
00:09:30.082   11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:30.082   11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.341    11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:30.341    11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:09:30.341    11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:30.341    11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.600    11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:30.600   11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:09:30.600   11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:30.600   11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:30.600   11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.600  BaseBdev1
00:09:30.600  [2024-12-16 11:30:56.444770] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:30.600   11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:30.600   11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:09:30.600   11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:09:30.600   11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:30.600   11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:30.600   11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:30.600   11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:30.600   11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:30.600   11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:30.600   11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.600   11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:30.600   11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:30.600   11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:30.600   11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.600  [
00:09:30.600  {
00:09:30.600  "name": "BaseBdev1",
00:09:30.600  "aliases": [
00:09:30.600  "759b33e7-ec33-4329-8729-67cf71ea4f52"
00:09:30.600  ],
00:09:30.600  "product_name": "Malloc disk",
00:09:30.600  "block_size": 512,
00:09:30.600  "num_blocks": 65536,
00:09:30.600  "uuid": "759b33e7-ec33-4329-8729-67cf71ea4f52",
00:09:30.600  "assigned_rate_limits": {
00:09:30.600  "rw_ios_per_sec": 0,
00:09:30.600  "rw_mbytes_per_sec": 0,
00:09:30.600  "r_mbytes_per_sec": 0,
00:09:30.600  "w_mbytes_per_sec": 0
00:09:30.600  },
00:09:30.600  "claimed": true,
00:09:30.600  "claim_type": "exclusive_write",
00:09:30.600  "zoned": false,
00:09:30.600  "supported_io_types": {
00:09:30.600  "read": true,
00:09:30.600  "write": true,
00:09:30.600  "unmap": true,
00:09:30.600  "flush": true,
00:09:30.600  "reset": true,
00:09:30.600  "nvme_admin": false,
00:09:30.600  "nvme_io": false,
00:09:30.600  "nvme_io_md": false,
00:09:30.600  "write_zeroes": true,
00:09:30.600  "zcopy": true,
00:09:30.600  "get_zone_info": false,
00:09:30.600  "zone_management": false,
00:09:30.600  "zone_append": false,
00:09:30.600  "compare": false,
00:09:30.600  "compare_and_write": false,
00:09:30.600  "abort": true,
00:09:30.600  "seek_hole": false,
00:09:30.600  "seek_data": false,
00:09:30.600  "copy": true,
00:09:30.600  "nvme_iov_md": false
00:09:30.600  },
00:09:30.600  "memory_domains": [
00:09:30.600  {
00:09:30.600  "dma_device_id": "system",
00:09:30.600  "dma_device_type": 1
00:09:30.600  },
00:09:30.600  {
00:09:30.600  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:30.600  "dma_device_type": 2
00:09:30.600  }
00:09:30.600  ],
00:09:30.600  "driver_specific": {}
00:09:30.600  }
00:09:30.600  ]
00:09:30.600   11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:30.600   11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:30.600   11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:30.600   11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:30.600   11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:30.600   11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:30.600   11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:30.600   11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:30.600   11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:30.601   11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:30.601   11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:30.601   11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:30.601    11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:30.601    11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:30.601    11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:30.601    11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.601    11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:30.601   11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:30.601    "name": "Existed_Raid",
00:09:30.601    "uuid": "00000000-0000-0000-0000-000000000000",
00:09:30.601    "strip_size_kb": 64,
00:09:30.601    "state": "configuring",
00:09:30.601    "raid_level": "raid0",
00:09:30.601    "superblock": false,
00:09:30.601    "num_base_bdevs": 3,
00:09:30.601    "num_base_bdevs_discovered": 2,
00:09:30.601    "num_base_bdevs_operational": 3,
00:09:30.601    "base_bdevs_list": [
00:09:30.601      {
00:09:30.601        "name": "BaseBdev1",
00:09:30.601        "uuid": "759b33e7-ec33-4329-8729-67cf71ea4f52",
00:09:30.601        "is_configured": true,
00:09:30.601        "data_offset": 0,
00:09:30.601        "data_size": 65536
00:09:30.601      },
00:09:30.601      {
00:09:30.601        "name": null,
00:09:30.601        "uuid": "323cc2cf-da0d-4b81-9b57-7c52a7091c94",
00:09:30.601        "is_configured": false,
00:09:30.601        "data_offset": 0,
00:09:30.601        "data_size": 65536
00:09:30.601      },
00:09:30.601      {
00:09:30.601        "name": "BaseBdev3",
00:09:30.601        "uuid": "048a9e2c-c924-41ab-9061-6ecfe46a75a7",
00:09:30.601        "is_configured": true,
00:09:30.601        "data_offset": 0,
00:09:30.601        "data_size": 65536
00:09:30.601      }
00:09:30.601    ]
00:09:30.601  }'
00:09:30.601   11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:30.601   11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:31.170    11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:31.170    11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:31.170    11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:31.170    11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:31.170    11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:31.170   11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:09:31.170   11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:09:31.170   11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:31.170   11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:31.170  [2024-12-16 11:30:56.976043] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:09:31.170   11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:31.170   11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:31.170   11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:31.170   11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:31.170   11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:31.170   11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:31.170   11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:31.170   11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:31.170   11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:31.170   11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:31.170   11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:31.170    11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:31.170    11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:31.170    11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:31.170    11:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:31.170    11:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:31.170   11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:31.170    "name": "Existed_Raid",
00:09:31.170    "uuid": "00000000-0000-0000-0000-000000000000",
00:09:31.170    "strip_size_kb": 64,
00:09:31.170    "state": "configuring",
00:09:31.170    "raid_level": "raid0",
00:09:31.170    "superblock": false,
00:09:31.170    "num_base_bdevs": 3,
00:09:31.170    "num_base_bdevs_discovered": 1,
00:09:31.170    "num_base_bdevs_operational": 3,
00:09:31.170    "base_bdevs_list": [
00:09:31.170      {
00:09:31.170        "name": "BaseBdev1",
00:09:31.170        "uuid": "759b33e7-ec33-4329-8729-67cf71ea4f52",
00:09:31.170        "is_configured": true,
00:09:31.170        "data_offset": 0,
00:09:31.170        "data_size": 65536
00:09:31.170      },
00:09:31.170      {
00:09:31.170        "name": null,
00:09:31.170        "uuid": "323cc2cf-da0d-4b81-9b57-7c52a7091c94",
00:09:31.170        "is_configured": false,
00:09:31.171        "data_offset": 0,
00:09:31.171        "data_size": 65536
00:09:31.171      },
00:09:31.171      {
00:09:31.171        "name": null,
00:09:31.171        "uuid": "048a9e2c-c924-41ab-9061-6ecfe46a75a7",
00:09:31.171        "is_configured": false,
00:09:31.171        "data_offset": 0,
00:09:31.171        "data_size": 65536
00:09:31.171      }
00:09:31.171    ]
00:09:31.171  }'
00:09:31.171   11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:31.171   11:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:31.432    11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:09:31.432    11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:31.432    11:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:31.432    11:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:31.432    11:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:31.432   11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:09:31.432   11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:09:31.432   11:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:31.432   11:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:31.691  [2024-12-16 11:30:57.499295] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:31.691   11:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
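[editor note] bdev_raid_add_base_bdev attaches an existing bdev to an empty slot of a raid that is still configuring; the *DEBUG* line confirms BaseBdev3 was claimed, and the verification that follows shows num_base_bdevs_discovered rising from 1 to 2 while the array stays in configuring. A hedged stand-alone equivalent, under the same rpc.py/socket assumptions as above:

    # Hot-add BaseBdev3 into the configuring raid, then re-read that slot (sketch)
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_raid_get_bdevs all |
        jq '.[0].base_bdevs_list[2].is_configured'   # expected: true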
00:09:31.691   11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:31.691   11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:31.691   11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:31.691   11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:31.691   11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:31.691   11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:31.691   11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:31.691   11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:31.691   11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:31.691   11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:31.691    11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:31.691    11:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:31.691    11:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:31.691    11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:31.691    11:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:31.691   11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:31.691    "name": "Existed_Raid",
00:09:31.691    "uuid": "00000000-0000-0000-0000-000000000000",
00:09:31.691    "strip_size_kb": 64,
00:09:31.691    "state": "configuring",
00:09:31.691    "raid_level": "raid0",
00:09:31.691    "superblock": false,
00:09:31.691    "num_base_bdevs": 3,
00:09:31.691    "num_base_bdevs_discovered": 2,
00:09:31.691    "num_base_bdevs_operational": 3,
00:09:31.691    "base_bdevs_list": [
00:09:31.691      {
00:09:31.691        "name": "BaseBdev1",
00:09:31.691        "uuid": "759b33e7-ec33-4329-8729-67cf71ea4f52",
00:09:31.691        "is_configured": true,
00:09:31.691        "data_offset": 0,
00:09:31.691        "data_size": 65536
00:09:31.691      },
00:09:31.691      {
00:09:31.691        "name": null,
00:09:31.691        "uuid": "323cc2cf-da0d-4b81-9b57-7c52a7091c94",
00:09:31.691        "is_configured": false,
00:09:31.691        "data_offset": 0,
00:09:31.691        "data_size": 65536
00:09:31.691      },
00:09:31.691      {
00:09:31.691        "name": "BaseBdev3",
00:09:31.691        "uuid": "048a9e2c-c924-41ab-9061-6ecfe46a75a7",
00:09:31.691        "is_configured": true,
00:09:31.691        "data_offset": 0,
00:09:31.691        "data_size": 65536
00:09:31.691      }
00:09:31.691    ]
00:09:31.691  }'
00:09:31.691   11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:31.691   11:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:31.950    11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:31.950    11:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:31.950    11:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:31.950    11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:09:31.950    11:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:31.950   11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:09:31.950   11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:31.950   11:30:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:31.950   11:30:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:31.950  [2024-12-16 11:30:58.014483] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:32.208   11:30:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
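[editor note] Deleting the backing malloc bdev (bdev_malloc_delete BaseBdev1) triggers _raid_bdev_remove_base_bdev: the slot keeps its UUID but its name becomes null and is_configured drops to false, so num_base_bdevs_discovered falls back to 1 in the next verification. A sketch of the same step outside the harness (same assumptions as above):

    # Remove the device behind slot 0; the raid stays configuring with that slot unconfigured (sketch)
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_delete BaseBdev1
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_raid_get_bdevs all |
        jq '.[0].base_bdevs_list[0].is_configured'   # expected: false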
00:09:32.208   11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:32.208   11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:32.208   11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:32.208   11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:32.208   11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:32.208   11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:32.208   11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:32.209   11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:32.209   11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:32.209   11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:32.209    11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:32.209    11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:32.209    11:30:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:32.209    11:30:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:32.209    11:30:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:32.209   11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:32.209    "name": "Existed_Raid",
00:09:32.209    "uuid": "00000000-0000-0000-0000-000000000000",
00:09:32.209    "strip_size_kb": 64,
00:09:32.209    "state": "configuring",
00:09:32.209    "raid_level": "raid0",
00:09:32.209    "superblock": false,
00:09:32.209    "num_base_bdevs": 3,
00:09:32.209    "num_base_bdevs_discovered": 1,
00:09:32.209    "num_base_bdevs_operational": 3,
00:09:32.209    "base_bdevs_list": [
00:09:32.209      {
00:09:32.209        "name": null,
00:09:32.209        "uuid": "759b33e7-ec33-4329-8729-67cf71ea4f52",
00:09:32.209        "is_configured": false,
00:09:32.209        "data_offset": 0,
00:09:32.209        "data_size": 65536
00:09:32.209      },
00:09:32.209      {
00:09:32.209        "name": null,
00:09:32.209        "uuid": "323cc2cf-da0d-4b81-9b57-7c52a7091c94",
00:09:32.209        "is_configured": false,
00:09:32.209        "data_offset": 0,
00:09:32.209        "data_size": 65536
00:09:32.209      },
00:09:32.209      {
00:09:32.209        "name": "BaseBdev3",
00:09:32.209        "uuid": "048a9e2c-c924-41ab-9061-6ecfe46a75a7",
00:09:32.209        "is_configured": true,
00:09:32.209        "data_offset": 0,
00:09:32.209        "data_size": 65536
00:09:32.209      }
00:09:32.209    ]
00:09:32.209  }'
00:09:32.209   11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:32.209   11:30:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:32.468    11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:32.468    11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:32.468    11:30:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:32.468    11:30:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:32.468    11:30:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:32.468   11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:09:32.468   11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:09:32.468   11:30:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:32.468   11:30:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:32.468  [2024-12-16 11:30:58.528561] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:32.468   11:30:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:32.468   11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:32.728   11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:32.728   11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:32.728   11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:32.728   11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:32.728   11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:32.728   11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:32.728   11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:32.728   11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:32.728   11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:32.728    11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:32.728    11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:32.728    11:30:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:32.728    11:30:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:32.728    11:30:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:32.728   11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:32.728    "name": "Existed_Raid",
00:09:32.728    "uuid": "00000000-0000-0000-0000-000000000000",
00:09:32.728    "strip_size_kb": 64,
00:09:32.728    "state": "configuring",
00:09:32.728    "raid_level": "raid0",
00:09:32.728    "superblock": false,
00:09:32.728    "num_base_bdevs": 3,
00:09:32.728    "num_base_bdevs_discovered": 2,
00:09:32.728    "num_base_bdevs_operational": 3,
00:09:32.728    "base_bdevs_list": [
00:09:32.728      {
00:09:32.728        "name": null,
00:09:32.728        "uuid": "759b33e7-ec33-4329-8729-67cf71ea4f52",
00:09:32.728        "is_configured": false,
00:09:32.728        "data_offset": 0,
00:09:32.728        "data_size": 65536
00:09:32.728      },
00:09:32.728      {
00:09:32.728        "name": "BaseBdev2",
00:09:32.728        "uuid": "323cc2cf-da0d-4b81-9b57-7c52a7091c94",
00:09:32.728        "is_configured": true,
00:09:32.728        "data_offset": 0,
00:09:32.728        "data_size": 65536
00:09:32.728      },
00:09:32.728      {
00:09:32.728        "name": "BaseBdev3",
00:09:32.728        "uuid": "048a9e2c-c924-41ab-9061-6ecfe46a75a7",
00:09:32.728        "is_configured": true,
00:09:32.728        "data_offset": 0,
00:09:32.728        "data_size": 65536
00:09:32.728      }
00:09:32.728    ]
00:09:32.728  }'
00:09:32.728   11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:32.728   11:30:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:32.987    11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:32.987    11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:09:32.987    11:30:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:32.987    11:30:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:32.987    11:30:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:32.987   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:09:32.987    11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:32.987    11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:09:32.987    11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:32.987    11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:32.988    11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:33.247   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 759b33e7-ec33-4329-8729-67cf71ea4f52
00:09:33.247   11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:33.247   11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:33.247  [2024-12-16 11:30:59.078952] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:09:33.247  [2024-12-16 11:30:59.079066] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:09:33.247  [2024-12-16 11:30:59.079099] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:09:33.247  [2024-12-16 11:30:59.079458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:09:33.247  [2024-12-16 11:30:59.079653] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:09:33.247  [2024-12-16 11:30:59.079701] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00
00:09:33.247  [2024-12-16 11:30:59.079946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:33.247  NewBaseBdev
00:09:33.247   11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
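[editor note] Recreating a malloc bdev with the UUID recorded in the empty slot (-u 759b33e7-...) lets the raid module re-claim it as NewBaseBdev; with all three slots configured, raid_bdev_configure_cont registers the io device and the array goes online (blockcnt 196608 = 3 x 65536 data blocks at blocklen 512, as logged above). A minimal sketch of this step, same assumptions as earlier:

    # Re-create the backing bdev with the slot's original UUID so the raid can reclaim it (sketch)
    uuid=$(scripts/rpc.py -s /var/tmp/spdk.sock bdev_raid_get_bdevs all |
           jq -r '.[0].base_bdevs_list[0].uuid')
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 32 512 -b NewBaseBdev -u "$uuid"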
00:09:33.247   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:09:33.247   11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev
00:09:33.247   11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:33.247   11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:33.247   11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:33.247   11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:33.247   11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:33.247   11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:33.247   11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:33.247   11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:33.247   11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:09:33.247   11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:33.247   11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:33.247  [
00:09:33.247  {
00:09:33.247  "name": "NewBaseBdev",
00:09:33.247  "aliases": [
00:09:33.247  "759b33e7-ec33-4329-8729-67cf71ea4f52"
00:09:33.247  ],
00:09:33.247  "product_name": "Malloc disk",
00:09:33.247  "block_size": 512,
00:09:33.247  "num_blocks": 65536,
00:09:33.247  "uuid": "759b33e7-ec33-4329-8729-67cf71ea4f52",
00:09:33.247  "assigned_rate_limits": {
00:09:33.247  "rw_ios_per_sec": 0,
00:09:33.247  "rw_mbytes_per_sec": 0,
00:09:33.247  "r_mbytes_per_sec": 0,
00:09:33.247  "w_mbytes_per_sec": 0
00:09:33.247  },
00:09:33.247  "claimed": true,
00:09:33.247  "claim_type": "exclusive_write",
00:09:33.247  "zoned": false,
00:09:33.247  "supported_io_types": {
00:09:33.247  "read": true,
00:09:33.247  "write": true,
00:09:33.247  "unmap": true,
00:09:33.247  "flush": true,
00:09:33.247  "reset": true,
00:09:33.247  "nvme_admin": false,
00:09:33.247  "nvme_io": false,
00:09:33.247  "nvme_io_md": false,
00:09:33.247  "write_zeroes": true,
00:09:33.247  "zcopy": true,
00:09:33.247  "get_zone_info": false,
00:09:33.247  "zone_management": false,
00:09:33.247  "zone_append": false,
00:09:33.247  "compare": false,
00:09:33.247  "compare_and_write": false,
00:09:33.247  "abort": true,
00:09:33.247  "seek_hole": false,
00:09:33.247  "seek_data": false,
00:09:33.247  "copy": true,
00:09:33.247  "nvme_iov_md": false
00:09:33.247  },
00:09:33.247  "memory_domains": [
00:09:33.247  {
00:09:33.247  "dma_device_id": "system",
00:09:33.247  "dma_device_type": 1
00:09:33.247  },
00:09:33.247  {
00:09:33.247  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:33.247  "dma_device_type": 2
00:09:33.247  }
00:09:33.247  ],
00:09:33.247  "driver_specific": {}
00:09:33.247  }
00:09:33.247  ]
00:09:33.247   11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:33.247   11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
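[editor note] waitforbdev combines bdev_wait_for_examine (so pending examine callbacks finish) with a bdev_get_bdevs lookup bounded by a 2000 ms timeout; returning 0 means NewBaseBdev is visible and, as the dump above shows, already claimed with claim_type exclusive_write by the raid. Stand-alone sketch of the same wait, under the same rpc.py assumptions:

    # Wait until NewBaseBdev has been examined and is visible, with a 2 s timeout (sketch)
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_wait_for_examine
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b NewBaseBdev -t 2000 >/dev/null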
00:09:33.247   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:09:33.248   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:33.248   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:33.248   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:33.248   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:33.248   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:33.248   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:33.248   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:33.248   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:33.248   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:33.248    11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:33.248    11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:33.248    11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:33.248    11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:33.248    11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:33.248   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:33.248    "name": "Existed_Raid",
00:09:33.248    "uuid": "c39c5bfe-de16-4a5d-9659-d27495e48f38",
00:09:33.248    "strip_size_kb": 64,
00:09:33.248    "state": "online",
00:09:33.248    "raid_level": "raid0",
00:09:33.248    "superblock": false,
00:09:33.248    "num_base_bdevs": 3,
00:09:33.248    "num_base_bdevs_discovered": 3,
00:09:33.248    "num_base_bdevs_operational": 3,
00:09:33.248    "base_bdevs_list": [
00:09:33.248      {
00:09:33.248        "name": "NewBaseBdev",
00:09:33.248        "uuid": "759b33e7-ec33-4329-8729-67cf71ea4f52",
00:09:33.248        "is_configured": true,
00:09:33.248        "data_offset": 0,
00:09:33.248        "data_size": 65536
00:09:33.248      },
00:09:33.248      {
00:09:33.248        "name": "BaseBdev2",
00:09:33.248        "uuid": "323cc2cf-da0d-4b81-9b57-7c52a7091c94",
00:09:33.248        "is_configured": true,
00:09:33.248        "data_offset": 0,
00:09:33.248        "data_size": 65536
00:09:33.248      },
00:09:33.248      {
00:09:33.248        "name": "BaseBdev3",
00:09:33.248        "uuid": "048a9e2c-c924-41ab-9061-6ecfe46a75a7",
00:09:33.248        "is_configured": true,
00:09:33.248        "data_offset": 0,
00:09:33.248        "data_size": 65536
00:09:33.248      }
00:09:33.248    ]
00:09:33.248  }'
00:09:33.248   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:33.248   11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:33.508   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:09:33.508   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:09:33.508   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:33.508   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:33.508   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:33.508   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:33.508    11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:09:33.508    11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:33.508    11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:33.508    11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:33.508  [2024-12-16 11:30:59.538610] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:33.508    11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:33.768   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:33.768    "name": "Existed_Raid",
00:09:33.768    "aliases": [
00:09:33.768      "c39c5bfe-de16-4a5d-9659-d27495e48f38"
00:09:33.768    ],
00:09:33.768    "product_name": "Raid Volume",
00:09:33.768    "block_size": 512,
00:09:33.768    "num_blocks": 196608,
00:09:33.768    "uuid": "c39c5bfe-de16-4a5d-9659-d27495e48f38",
00:09:33.768    "assigned_rate_limits": {
00:09:33.768      "rw_ios_per_sec": 0,
00:09:33.768      "rw_mbytes_per_sec": 0,
00:09:33.768      "r_mbytes_per_sec": 0,
00:09:33.768      "w_mbytes_per_sec": 0
00:09:33.768    },
00:09:33.768    "claimed": false,
00:09:33.768    "zoned": false,
00:09:33.768    "supported_io_types": {
00:09:33.768      "read": true,
00:09:33.768      "write": true,
00:09:33.768      "unmap": true,
00:09:33.768      "flush": true,
00:09:33.768      "reset": true,
00:09:33.768      "nvme_admin": false,
00:09:33.768      "nvme_io": false,
00:09:33.768      "nvme_io_md": false,
00:09:33.768      "write_zeroes": true,
00:09:33.768      "zcopy": false,
00:09:33.768      "get_zone_info": false,
00:09:33.768      "zone_management": false,
00:09:33.768      "zone_append": false,
00:09:33.768      "compare": false,
00:09:33.768      "compare_and_write": false,
00:09:33.768      "abort": false,
00:09:33.768      "seek_hole": false,
00:09:33.768      "seek_data": false,
00:09:33.768      "copy": false,
00:09:33.768      "nvme_iov_md": false
00:09:33.768    },
00:09:33.768    "memory_domains": [
00:09:33.768      {
00:09:33.768        "dma_device_id": "system",
00:09:33.768        "dma_device_type": 1
00:09:33.768      },
00:09:33.768      {
00:09:33.768        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:33.768        "dma_device_type": 2
00:09:33.768      },
00:09:33.768      {
00:09:33.768        "dma_device_id": "system",
00:09:33.768        "dma_device_type": 1
00:09:33.768      },
00:09:33.768      {
00:09:33.768        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:33.768        "dma_device_type": 2
00:09:33.768      },
00:09:33.768      {
00:09:33.768        "dma_device_id": "system",
00:09:33.768        "dma_device_type": 1
00:09:33.768      },
00:09:33.768      {
00:09:33.768        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:33.768        "dma_device_type": 2
00:09:33.768      }
00:09:33.768    ],
00:09:33.768    "driver_specific": {
00:09:33.768      "raid": {
00:09:33.768        "uuid": "c39c5bfe-de16-4a5d-9659-d27495e48f38",
00:09:33.768        "strip_size_kb": 64,
00:09:33.768        "state": "online",
00:09:33.768        "raid_level": "raid0",
00:09:33.768        "superblock": false,
00:09:33.768        "num_base_bdevs": 3,
00:09:33.768        "num_base_bdevs_discovered": 3,
00:09:33.768        "num_base_bdevs_operational": 3,
00:09:33.768        "base_bdevs_list": [
00:09:33.768          {
00:09:33.768            "name": "NewBaseBdev",
00:09:33.768            "uuid": "759b33e7-ec33-4329-8729-67cf71ea4f52",
00:09:33.768            "is_configured": true,
00:09:33.768            "data_offset": 0,
00:09:33.768            "data_size": 65536
00:09:33.768          },
00:09:33.768          {
00:09:33.768            "name": "BaseBdev2",
00:09:33.768            "uuid": "323cc2cf-da0d-4b81-9b57-7c52a7091c94",
00:09:33.768            "is_configured": true,
00:09:33.768            "data_offset": 0,
00:09:33.768            "data_size": 65536
00:09:33.768          },
00:09:33.768          {
00:09:33.768            "name": "BaseBdev3",
00:09:33.768            "uuid": "048a9e2c-c924-41ab-9061-6ecfe46a75a7",
00:09:33.768            "is_configured": true,
00:09:33.768            "data_offset": 0,
00:09:33.768            "data_size": 65536
00:09:33.768          }
00:09:33.768        ]
00:09:33.768      }
00:09:33.768    }
00:09:33.768  }'
00:09:33.768    11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:33.768   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:09:33.768  BaseBdev2
00:09:33.768  BaseBdev3'
00:09:33.768    11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:33.768   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:09:33.768   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:33.768    11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:09:33.768    11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:33.768    11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:33.768    11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:33.768    11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:33.768   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:09:33.768   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:09:33.768   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:33.768    11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:33.768    11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:09:33.768    11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:33.768    11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:33.768    11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:33.768   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:09:33.768   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:09:33.768   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:33.768    11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:33.768    11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:09:33.768    11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:33.768    11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:33.768    11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:33.769   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:09:33.769   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
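[editor note] verify_raid_bdev_properties checks that the raid volume advertises the same block_size, md_size, md_interleave and dif_type as each configured base bdev; for malloc bdevs the metadata fields are null, so the jq join produces the '512   ' strings compared above. A sketch of one such comparison, mirroring the log's own jq filter:

    # Compare the raid's geometry with one base bdev's (sketch; null fields join as empty strings)
    raid=$(scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b Existed_Raid |
           jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
    base=$(scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b NewBaseBdev |
           jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
    [[ "$raid" == "$base" ]] || echo "geometry mismatch"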
00:09:33.769   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:33.769   11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:33.769   11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:33.769  [2024-12-16 11:30:59.821801] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:33.769  [2024-12-16 11:30:59.821879] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:33.769  [2024-12-16 11:30:59.822006] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:33.769  [2024-12-16 11:30:59.822100] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:33.769  [2024-12-16 11:30:59.822171] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline
00:09:33.769   11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
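[editor note] bdev_raid_delete tears the array down in stages, as the *DEBUG* lines show: the state changes from online to offline, the raid bdev is destructed, and once the base-bdev count reaches 0 the descriptor is cleaned up ('state offline'). A minimal sketch of the same teardown:

    # Delete the raid volume; base bdevs are released and the raid descriptor disappears (sketch)
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_raid_delete Existed_Raid
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_raid_get_bdevs all | jq length   # expected: 0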
00:09:33.769   11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 75308
00:09:33.769   11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 75308 ']'
00:09:33.769   11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 75308
00:09:33.769    11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname
00:09:34.028   11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:34.028    11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75308
00:09:34.028  killing process with pid 75308
00:09:34.028   11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:34.028   11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:34.028   11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75308'
00:09:34.028   11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 75308
00:09:34.028   11:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 75308
00:09:34.028  [2024-12-16 11:30:59.870220] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:34.028  [2024-12-16 11:30:59.903497] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:34.288   11:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:09:34.288  
00:09:34.288  real	0m9.067s
00:09:34.288  user	0m15.491s
00:09:34.288  sys	0m1.798s
00:09:34.288   11:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:34.288  ************************************
00:09:34.288  END TEST raid_state_function_test
00:09:34.288  ************************************
00:09:34.288   11:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:34.288   11:31:00 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true
00:09:34.288   11:31:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:09:34.288   11:31:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:34.288   11:31:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:34.288  ************************************
00:09:34.288  START TEST raid_state_function_test_sb
00:09:34.288  ************************************
00:09:34.288   11:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true
00:09:34.288   11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:09:34.288   11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:09:34.288   11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:09:34.288   11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:09:34.288    11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:09:34.288    11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:34.288    11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:09:34.288    11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:34.288    11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:34.288    11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:09:34.288    11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:34.288    11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:34.288    11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:09:34.288    11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:34.288    11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:34.288   11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:09:34.288   11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:09:34.288   11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:09:34.288   11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:09:34.288   11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:09:34.288   11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:09:34.288   11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:09:34.288   11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:09:34.288   11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:09:34.288   11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:09:34.288   11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:09:34.288   11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=75918
00:09:34.288   11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:09:34.288  Process raid pid: 75918
00:09:34.288   11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75918'
00:09:34.288   11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75918
00:09:34.288   11:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 75918 ']'
00:09:34.288   11:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:34.288   11:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:34.288   11:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:34.288  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:34.288   11:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:34.288   11:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:34.288  [2024-12-16 11:31:00.336606] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:09:34.288  [2024-12-16 11:31:00.336835] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:34.547  [2024-12-16 11:31:00.502871] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:34.547  [2024-12-16 11:31:00.557847] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:09:34.547  [2024-12-16 11:31:00.604169] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:34.547  [2024-12-16 11:31:00.604223] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:35.486   11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:35.486   11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0
00:09:35.486   11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:35.486   11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:35.486   11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:35.486  [2024-12-16 11:31:01.259584] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:35.486  [2024-12-16 11:31:01.259641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:35.486  [2024-12-16 11:31:01.259666] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:35.486  [2024-12-16 11:31:01.259679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:35.486  [2024-12-16 11:31:01.259687] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:35.486  [2024-12-16 11:31:01.259702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:35.486   11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
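[editor note] raid_state_function_test_sb repeats the raid0 scenario with a superblock: bdev_raid_create is issued with -s before any of the named base bdevs exist, so the RPC only records the expected members ("doesn't exist now" above) and the array starts out in the configuring state. Because the superblock reserves space on each member, the descriptors later in this test report data_offset 2048 and data_size 63488 instead of the 0/65536 seen in the non-superblock run. Sketch of the creation call, same rpc.py/socket assumptions as earlier:

    # Create a raid0 with superblock (-s) over base bdevs that will be added later (sketch)
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_raid_create -z 64 -s -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid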
00:09:35.486   11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:35.486   11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:35.486   11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:35.486   11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:35.487   11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:35.487   11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:35.487   11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:35.487   11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:35.487   11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:35.487   11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:35.487    11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:35.487    11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:35.487    11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:35.487    11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:35.487    11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:35.487   11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:35.487    "name": "Existed_Raid",
00:09:35.487    "uuid": "49c03735-12f2-4052-a4c9-b2b7faedc506",
00:09:35.487    "strip_size_kb": 64,
00:09:35.487    "state": "configuring",
00:09:35.487    "raid_level": "raid0",
00:09:35.487    "superblock": true,
00:09:35.487    "num_base_bdevs": 3,
00:09:35.487    "num_base_bdevs_discovered": 0,
00:09:35.487    "num_base_bdevs_operational": 3,
00:09:35.487    "base_bdevs_list": [
00:09:35.487      {
00:09:35.487        "name": "BaseBdev1",
00:09:35.487        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:35.487        "is_configured": false,
00:09:35.487        "data_offset": 0,
00:09:35.487        "data_size": 0
00:09:35.487      },
00:09:35.487      {
00:09:35.487        "name": "BaseBdev2",
00:09:35.487        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:35.487        "is_configured": false,
00:09:35.487        "data_offset": 0,
00:09:35.487        "data_size": 0
00:09:35.487      },
00:09:35.487      {
00:09:35.487        "name": "BaseBdev3",
00:09:35.487        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:35.487        "is_configured": false,
00:09:35.487        "data_offset": 0,
00:09:35.487        "data_size": 0
00:09:35.487      }
00:09:35.487    ]
00:09:35.487  }'
00:09:35.487   11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:35.487   11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:35.747   11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:35.747   11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:35.747   11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:35.747  [2024-12-16 11:31:01.738662] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:35.747  [2024-12-16 11:31:01.738770] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:09:35.747   11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:35.747   11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:35.747   11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:35.747   11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:35.747  [2024-12-16 11:31:01.750692] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:35.747  [2024-12-16 11:31:01.750783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:35.747  [2024-12-16 11:31:01.750820] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:35.747  [2024-12-16 11:31:01.750861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:35.747  [2024-12-16 11:31:01.750897] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:35.747  [2024-12-16 11:31:01.750933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:35.747   11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:35.747   11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:35.747   11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:35.747   11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:35.747  [2024-12-16 11:31:01.772341] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:35.747  BaseBdev1
00:09:35.747   11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:35.747   11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:09:35.747   11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:09:35.747   11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:35.747   11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:09:35.747   11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:35.747   11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:35.747   11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:35.747   11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:35.747   11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:35.747   11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:35.747   11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:35.747   11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:35.747   11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:35.747  [
00:09:35.747  {
00:09:35.747  "name": "BaseBdev1",
00:09:35.747  "aliases": [
00:09:35.747  "87b981f9-2ee5-4173-b01b-cf9325f55136"
00:09:35.747  ],
00:09:35.747  "product_name": "Malloc disk",
00:09:35.747  "block_size": 512,
00:09:35.747  "num_blocks": 65536,
00:09:35.747  "uuid": "87b981f9-2ee5-4173-b01b-cf9325f55136",
00:09:35.747  "assigned_rate_limits": {
00:09:35.747  "rw_ios_per_sec": 0,
00:09:35.747  "rw_mbytes_per_sec": 0,
00:09:35.747  "r_mbytes_per_sec": 0,
00:09:35.747  "w_mbytes_per_sec": 0
00:09:35.747  },
00:09:35.747  "claimed": true,
00:09:35.747  "claim_type": "exclusive_write",
00:09:35.747  "zoned": false,
00:09:35.747  "supported_io_types": {
00:09:35.747  "read": true,
00:09:35.747  "write": true,
00:09:35.747  "unmap": true,
00:09:35.747  "flush": true,
00:09:35.747  "reset": true,
00:09:35.747  "nvme_admin": false,
00:09:35.747  "nvme_io": false,
00:09:35.747  "nvme_io_md": false,
00:09:35.747  "write_zeroes": true,
00:09:35.747  "zcopy": true,
00:09:35.747  "get_zone_info": false,
00:09:35.747  "zone_management": false,
00:09:35.747  "zone_append": false,
00:09:35.747  "compare": false,
00:09:35.747  "compare_and_write": false,
00:09:35.747  "abort": true,
00:09:35.747  "seek_hole": false,
00:09:35.747  "seek_data": false,
00:09:35.747  "copy": true,
00:09:35.747  "nvme_iov_md": false
00:09:35.747  },
00:09:35.747  "memory_domains": [
00:09:35.747  {
00:09:35.747  "dma_device_id": "system",
00:09:35.747  "dma_device_type": 1
00:09:35.747  },
00:09:35.747  {
00:09:35.747  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:36.007  "dma_device_type": 2
00:09:36.007  }
00:09:36.007  ],
00:09:36.007  "driver_specific": {}
00:09:36.007  }
00:09:36.007  ]
00:09:36.007   11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:36.007   11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:09:36.007   11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:36.007   11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:36.007   11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:36.007   11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:36.007   11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:36.007   11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:36.007   11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:36.007   11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:36.007   11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:36.007   11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:36.007    11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:36.007    11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:36.007    11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:36.007    11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:36.007    11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:36.007   11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:36.007    "name": "Existed_Raid",
00:09:36.007    "uuid": "1ca148f1-c9ba-4ff6-b241-34ec70dd37fd",
00:09:36.007    "strip_size_kb": 64,
00:09:36.007    "state": "configuring",
00:09:36.007    "raid_level": "raid0",
00:09:36.007    "superblock": true,
00:09:36.007    "num_base_bdevs": 3,
00:09:36.007    "num_base_bdevs_discovered": 1,
00:09:36.007    "num_base_bdevs_operational": 3,
00:09:36.007    "base_bdevs_list": [
00:09:36.007      {
00:09:36.007        "name": "BaseBdev1",
00:09:36.007        "uuid": "87b981f9-2ee5-4173-b01b-cf9325f55136",
00:09:36.007        "is_configured": true,
00:09:36.007        "data_offset": 2048,
00:09:36.007        "data_size": 63488
00:09:36.007      },
00:09:36.007      {
00:09:36.007        "name": "BaseBdev2",
00:09:36.007        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:36.007        "is_configured": false,
00:09:36.007        "data_offset": 0,
00:09:36.007        "data_size": 0
00:09:36.007      },
00:09:36.007      {
00:09:36.007        "name": "BaseBdev3",
00:09:36.007        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:36.007        "is_configured": false,
00:09:36.007        "data_offset": 0,
00:09:36.007        "data_size": 0
00:09:36.007      }
00:09:36.007    ]
00:09:36.007  }'
00:09:36.007   11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:36.007   11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:36.266   11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:36.266   11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:36.266   11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:36.266  [2024-12-16 11:31:02.255632] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:36.266  [2024-12-16 11:31:02.255743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:09:36.266   11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:36.266   11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:36.266   11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:36.266   11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:36.266  [2024-12-16 11:31:02.267655] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:36.266  [2024-12-16 11:31:02.269746] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:36.266  [2024-12-16 11:31:02.269792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:36.266  [2024-12-16 11:31:02.269802] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:36.266  [2024-12-16 11:31:02.269814] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:36.266   11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:36.266   11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:09:36.266   11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:36.266   11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:36.266   11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:36.266   11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:36.266   11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:36.266   11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:36.266   11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:36.266   11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:36.266   11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:36.266   11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:36.266   11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:36.266    11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:36.266    11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:36.266    11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:36.266    11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:36.266    11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:36.266   11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:36.266    "name": "Existed_Raid",
00:09:36.266    "uuid": "e5a33852-030e-409f-96e7-d2ee8482dcd5",
00:09:36.266    "strip_size_kb": 64,
00:09:36.266    "state": "configuring",
00:09:36.266    "raid_level": "raid0",
00:09:36.266    "superblock": true,
00:09:36.266    "num_base_bdevs": 3,
00:09:36.266    "num_base_bdevs_discovered": 1,
00:09:36.266    "num_base_bdevs_operational": 3,
00:09:36.266    "base_bdevs_list": [
00:09:36.266      {
00:09:36.266        "name": "BaseBdev1",
00:09:36.266        "uuid": "87b981f9-2ee5-4173-b01b-cf9325f55136",
00:09:36.266        "is_configured": true,
00:09:36.266        "data_offset": 2048,
00:09:36.266        "data_size": 63488
00:09:36.266      },
00:09:36.266      {
00:09:36.266        "name": "BaseBdev2",
00:09:36.266        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:36.266        "is_configured": false,
00:09:36.266        "data_offset": 0,
00:09:36.266        "data_size": 0
00:09:36.266      },
00:09:36.266      {
00:09:36.266        "name": "BaseBdev3",
00:09:36.266        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:36.266        "is_configured": false,
00:09:36.266        "data_offset": 0,
00:09:36.266        "data_size": 0
00:09:36.266      }
00:09:36.266    ]
00:09:36.266  }'
00:09:36.266   11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:36.266   11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:36.835   11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:36.835   11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:36.835   11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:36.835  [2024-12-16 11:31:02.779523] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:36.835  BaseBdev2
00:09:36.835   11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:36.835   11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:09:36.835   11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:09:36.835   11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:36.835   11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:09:36.835   11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:36.835   11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:36.835   11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:36.835   11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:36.835   11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:36.835   11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:36.835   11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:36.835   11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:36.835   11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:36.835  [
00:09:36.835  {
00:09:36.835  "name": "BaseBdev2",
00:09:36.835  "aliases": [
00:09:36.835  "998427d6-76b7-4b4a-908c-7df6ce291a7b"
00:09:36.835  ],
00:09:36.835  "product_name": "Malloc disk",
00:09:36.835  "block_size": 512,
00:09:36.835  "num_blocks": 65536,
00:09:36.835  "uuid": "998427d6-76b7-4b4a-908c-7df6ce291a7b",
00:09:36.835  "assigned_rate_limits": {
00:09:36.835  "rw_ios_per_sec": 0,
00:09:36.835  "rw_mbytes_per_sec": 0,
00:09:36.835  "r_mbytes_per_sec": 0,
00:09:36.835  "w_mbytes_per_sec": 0
00:09:36.835  },
00:09:36.835  "claimed": true,
00:09:36.835  "claim_type": "exclusive_write",
00:09:36.835  "zoned": false,
00:09:36.835  "supported_io_types": {
00:09:36.835  "read": true,
00:09:36.835  "write": true,
00:09:36.835  "unmap": true,
00:09:36.835  "flush": true,
00:09:36.835  "reset": true,
00:09:36.835  "nvme_admin": false,
00:09:36.835  "nvme_io": false,
00:09:36.835  "nvme_io_md": false,
00:09:36.835  "write_zeroes": true,
00:09:36.835  "zcopy": true,
00:09:36.835  "get_zone_info": false,
00:09:36.835  "zone_management": false,
00:09:36.835  "zone_append": false,
00:09:36.835  "compare": false,
00:09:36.835  "compare_and_write": false,
00:09:36.835  "abort": true,
00:09:36.835  "seek_hole": false,
00:09:36.835  "seek_data": false,
00:09:36.835  "copy": true,
00:09:36.835  "nvme_iov_md": false
00:09:36.835  },
00:09:36.835  "memory_domains": [
00:09:36.835  {
00:09:36.835  "dma_device_id": "system",
00:09:36.835  "dma_device_type": 1
00:09:36.835  },
00:09:36.835  {
00:09:36.835  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:36.835  "dma_device_type": 2
00:09:36.835  }
00:09:36.835  ],
00:09:36.835  "driver_specific": {}
00:09:36.835  }
00:09:36.835  ]
00:09:36.835   11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:36.835   11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:09:36.835   11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:36.835   11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:36.835   11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:36.835   11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:36.835   11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:36.835   11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:36.835   11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:36.835   11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:36.835   11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:36.835   11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:36.835   11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:36.835   11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:36.835    11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:36.835    11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:36.835    11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:36.836    11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:36.836    11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:36.836   11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:36.836    "name": "Existed_Raid",
00:09:36.836    "uuid": "e5a33852-030e-409f-96e7-d2ee8482dcd5",
00:09:36.836    "strip_size_kb": 64,
00:09:36.836    "state": "configuring",
00:09:36.836    "raid_level": "raid0",
00:09:36.836    "superblock": true,
00:09:36.836    "num_base_bdevs": 3,
00:09:36.836    "num_base_bdevs_discovered": 2,
00:09:36.836    "num_base_bdevs_operational": 3,
00:09:36.836    "base_bdevs_list": [
00:09:36.836      {
00:09:36.836        "name": "BaseBdev1",
00:09:36.836        "uuid": "87b981f9-2ee5-4173-b01b-cf9325f55136",
00:09:36.836        "is_configured": true,
00:09:36.836        "data_offset": 2048,
00:09:36.836        "data_size": 63488
00:09:36.836      },
00:09:36.836      {
00:09:36.836        "name": "BaseBdev2",
00:09:36.836        "uuid": "998427d6-76b7-4b4a-908c-7df6ce291a7b",
00:09:36.836        "is_configured": true,
00:09:36.836        "data_offset": 2048,
00:09:36.836        "data_size": 63488
00:09:36.836      },
00:09:36.836      {
00:09:36.836        "name": "BaseBdev3",
00:09:36.836        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:36.836        "is_configured": false,
00:09:36.836        "data_offset": 0,
00:09:36.836        "data_size": 0
00:09:36.836      }
00:09:36.836    ]
00:09:36.836  }'
00:09:36.836   11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:36.836   11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:37.405  [2024-12-16 11:31:03.258305] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:37.405  [2024-12-16 11:31:03.258562] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:09:37.405  [2024-12-16 11:31:03.258595] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:09:37.405  BaseBdev3
00:09:37.405  [2024-12-16 11:31:03.258974] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:09:37.405  [2024-12-16 11:31:03.259126] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:09:37.405  [2024-12-16 11:31:03.259139] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:09:37.405  [2024-12-16 11:31:03.259306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:37.405  [
00:09:37.405  {
00:09:37.405  "name": "BaseBdev3",
00:09:37.405  "aliases": [
00:09:37.405  "f8304368-6195-4dc3-a9fd-4fdc710f7f55"
00:09:37.405  ],
00:09:37.405  "product_name": "Malloc disk",
00:09:37.405  "block_size": 512,
00:09:37.405  "num_blocks": 65536,
00:09:37.405  "uuid": "f8304368-6195-4dc3-a9fd-4fdc710f7f55",
00:09:37.405  "assigned_rate_limits": {
00:09:37.405  "rw_ios_per_sec": 0,
00:09:37.405  "rw_mbytes_per_sec": 0,
00:09:37.405  "r_mbytes_per_sec": 0,
00:09:37.405  "w_mbytes_per_sec": 0
00:09:37.405  },
00:09:37.405  "claimed": true,
00:09:37.405  "claim_type": "exclusive_write",
00:09:37.405  "zoned": false,
00:09:37.405  "supported_io_types": {
00:09:37.405  "read": true,
00:09:37.405  "write": true,
00:09:37.405  "unmap": true,
00:09:37.405  "flush": true,
00:09:37.405  "reset": true,
00:09:37.405  "nvme_admin": false,
00:09:37.405  "nvme_io": false,
00:09:37.405  "nvme_io_md": false,
00:09:37.405  "write_zeroes": true,
00:09:37.405  "zcopy": true,
00:09:37.405  "get_zone_info": false,
00:09:37.405  "zone_management": false,
00:09:37.405  "zone_append": false,
00:09:37.405  "compare": false,
00:09:37.405  "compare_and_write": false,
00:09:37.405  "abort": true,
00:09:37.405  "seek_hole": false,
00:09:37.405  "seek_data": false,
00:09:37.405  "copy": true,
00:09:37.405  "nvme_iov_md": false
00:09:37.405  },
00:09:37.405  "memory_domains": [
00:09:37.405  {
00:09:37.405  "dma_device_id": "system",
00:09:37.405  "dma_device_type": 1
00:09:37.405  },
00:09:37.405  {
00:09:37.405  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:37.405  "dma_device_type": 2
00:09:37.405  }
00:09:37.405  ],
00:09:37.405  "driver_specific": {}
00:09:37.405  }
00:09:37.405  ]
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:37.405    11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:37.405    11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:37.405    11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:37.405    11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:37.405    11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:37.405    "name": "Existed_Raid",
00:09:37.405    "uuid": "e5a33852-030e-409f-96e7-d2ee8482dcd5",
00:09:37.405    "strip_size_kb": 64,
00:09:37.405    "state": "online",
00:09:37.405    "raid_level": "raid0",
00:09:37.405    "superblock": true,
00:09:37.405    "num_base_bdevs": 3,
00:09:37.405    "num_base_bdevs_discovered": 3,
00:09:37.405    "num_base_bdevs_operational": 3,
00:09:37.405    "base_bdevs_list": [
00:09:37.405      {
00:09:37.405        "name": "BaseBdev1",
00:09:37.405        "uuid": "87b981f9-2ee5-4173-b01b-cf9325f55136",
00:09:37.405        "is_configured": true,
00:09:37.405        "data_offset": 2048,
00:09:37.405        "data_size": 63488
00:09:37.405      },
00:09:37.405      {
00:09:37.405        "name": "BaseBdev2",
00:09:37.405        "uuid": "998427d6-76b7-4b4a-908c-7df6ce291a7b",
00:09:37.405        "is_configured": true,
00:09:37.405        "data_offset": 2048,
00:09:37.405        "data_size": 63488
00:09:37.405      },
00:09:37.405      {
00:09:37.405        "name": "BaseBdev3",
00:09:37.405        "uuid": "f8304368-6195-4dc3-a9fd-4fdc710f7f55",
00:09:37.405        "is_configured": true,
00:09:37.405        "data_offset": 2048,
00:09:37.405        "data_size": 63488
00:09:37.405      }
00:09:37.405    ]
00:09:37.405  }'
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:37.405   11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:37.679   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:09:37.679   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:09:37.679   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:37.679   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:37.679   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:09:37.679   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:37.679    11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:09:37.679    11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:37.679    11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:37.679    11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:37.679  [2024-12-16 11:31:03.698000] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:37.679    11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:37.956   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:37.956    "name": "Existed_Raid",
00:09:37.956    "aliases": [
00:09:37.956      "e5a33852-030e-409f-96e7-d2ee8482dcd5"
00:09:37.956    ],
00:09:37.956    "product_name": "Raid Volume",
00:09:37.956    "block_size": 512,
00:09:37.956    "num_blocks": 190464,
00:09:37.956    "uuid": "e5a33852-030e-409f-96e7-d2ee8482dcd5",
00:09:37.956    "assigned_rate_limits": {
00:09:37.956      "rw_ios_per_sec": 0,
00:09:37.956      "rw_mbytes_per_sec": 0,
00:09:37.956      "r_mbytes_per_sec": 0,
00:09:37.956      "w_mbytes_per_sec": 0
00:09:37.956    },
00:09:37.956    "claimed": false,
00:09:37.956    "zoned": false,
00:09:37.956    "supported_io_types": {
00:09:37.956      "read": true,
00:09:37.956      "write": true,
00:09:37.956      "unmap": true,
00:09:37.956      "flush": true,
00:09:37.956      "reset": true,
00:09:37.956      "nvme_admin": false,
00:09:37.956      "nvme_io": false,
00:09:37.956      "nvme_io_md": false,
00:09:37.956      "write_zeroes": true,
00:09:37.956      "zcopy": false,
00:09:37.956      "get_zone_info": false,
00:09:37.956      "zone_management": false,
00:09:37.956      "zone_append": false,
00:09:37.956      "compare": false,
00:09:37.956      "compare_and_write": false,
00:09:37.956      "abort": false,
00:09:37.956      "seek_hole": false,
00:09:37.956      "seek_data": false,
00:09:37.956      "copy": false,
00:09:37.956      "nvme_iov_md": false
00:09:37.956    },
00:09:37.956    "memory_domains": [
00:09:37.956      {
00:09:37.956        "dma_device_id": "system",
00:09:37.956        "dma_device_type": 1
00:09:37.956      },
00:09:37.956      {
00:09:37.956        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:37.956        "dma_device_type": 2
00:09:37.956      },
00:09:37.956      {
00:09:37.956        "dma_device_id": "system",
00:09:37.956        "dma_device_type": 1
00:09:37.956      },
00:09:37.956      {
00:09:37.956        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:37.956        "dma_device_type": 2
00:09:37.956      },
00:09:37.956      {
00:09:37.956        "dma_device_id": "system",
00:09:37.956        "dma_device_type": 1
00:09:37.956      },
00:09:37.956      {
00:09:37.956        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:37.956        "dma_device_type": 2
00:09:37.956      }
00:09:37.956    ],
00:09:37.956    "driver_specific": {
00:09:37.956      "raid": {
00:09:37.956        "uuid": "e5a33852-030e-409f-96e7-d2ee8482dcd5",
00:09:37.956        "strip_size_kb": 64,
00:09:37.956        "state": "online",
00:09:37.956        "raid_level": "raid0",
00:09:37.956        "superblock": true,
00:09:37.956        "num_base_bdevs": 3,
00:09:37.956        "num_base_bdevs_discovered": 3,
00:09:37.956        "num_base_bdevs_operational": 3,
00:09:37.956        "base_bdevs_list": [
00:09:37.956          {
00:09:37.956            "name": "BaseBdev1",
00:09:37.956            "uuid": "87b981f9-2ee5-4173-b01b-cf9325f55136",
00:09:37.956            "is_configured": true,
00:09:37.956            "data_offset": 2048,
00:09:37.956            "data_size": 63488
00:09:37.956          },
00:09:37.956          {
00:09:37.956            "name": "BaseBdev2",
00:09:37.956            "uuid": "998427d6-76b7-4b4a-908c-7df6ce291a7b",
00:09:37.956            "is_configured": true,
00:09:37.956            "data_offset": 2048,
00:09:37.956            "data_size": 63488
00:09:37.956          },
00:09:37.956          {
00:09:37.956            "name": "BaseBdev3",
00:09:37.956            "uuid": "f8304368-6195-4dc3-a9fd-4fdc710f7f55",
00:09:37.956            "is_configured": true,
00:09:37.956            "data_offset": 2048,
00:09:37.956            "data_size": 63488
00:09:37.956          }
00:09:37.956        ]
00:09:37.956      }
00:09:37.956    }
00:09:37.956  }'
00:09:37.956    11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:37.956   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:09:37.956  BaseBdev2
00:09:37.956  BaseBdev3'
00:09:37.956    11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:37.956   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:09:37.956   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:37.956    11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:09:37.956    11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:37.956    11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:37.956    11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:37.956    11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:37.956   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:09:37.956   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:09:37.956   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:37.956    11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:09:37.956    11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:37.956    11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:37.956    11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:37.956    11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:37.956   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:09:37.956   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:09:37.956   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:37.956    11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:09:37.957    11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:37.957    11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:37.957    11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:37.957    11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:37.957   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:09:37.957   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:09:37.957   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:37.957   11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:37.957   11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:37.957  [2024-12-16 11:31:03.985254] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:37.957  [2024-12-16 11:31:03.985339] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:37.957  [2024-12-16 11:31:03.985463] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:37.957   11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:37.957   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:09:37.957   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0
00:09:37.957   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:37.957   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1
00:09:37.957   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:09:37.957   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2
00:09:37.957   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:37.957   11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:09:37.957   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:37.957   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:37.957   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:37.957   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:37.957   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:37.957   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:37.957   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:37.957    11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:37.957    11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:37.957    11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:37.957    11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:38.215    11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:38.215   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:38.215    "name": "Existed_Raid",
00:09:38.215    "uuid": "e5a33852-030e-409f-96e7-d2ee8482dcd5",
00:09:38.215    "strip_size_kb": 64,
00:09:38.215    "state": "offline",
00:09:38.215    "raid_level": "raid0",
00:09:38.215    "superblock": true,
00:09:38.215    "num_base_bdevs": 3,
00:09:38.215    "num_base_bdevs_discovered": 2,
00:09:38.215    "num_base_bdevs_operational": 2,
00:09:38.215    "base_bdevs_list": [
00:09:38.215      {
00:09:38.215        "name": null,
00:09:38.215        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:38.215        "is_configured": false,
00:09:38.215        "data_offset": 0,
00:09:38.215        "data_size": 63488
00:09:38.215      },
00:09:38.215      {
00:09:38.215        "name": "BaseBdev2",
00:09:38.215        "uuid": "998427d6-76b7-4b4a-908c-7df6ce291a7b",
00:09:38.215        "is_configured": true,
00:09:38.215        "data_offset": 2048,
00:09:38.215        "data_size": 63488
00:09:38.215      },
00:09:38.215      {
00:09:38.215        "name": "BaseBdev3",
00:09:38.215        "uuid": "f8304368-6195-4dc3-a9fd-4fdc710f7f55",
00:09:38.215        "is_configured": true,
00:09:38.215        "data_offset": 2048,
00:09:38.215        "data_size": 63488
00:09:38.215      }
00:09:38.215    ]
00:09:38.215  }'
00:09:38.215   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:38.215   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:38.475   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:09:38.475   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:38.475    11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:38.475    11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:38.475    11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:38.475    11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:38.475    11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:38.475   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:38.475   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:38.475   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:09:38.475   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:38.475   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:38.475  [2024-12-16 11:31:04.476638] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:38.475   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:38.475   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:38.475   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:38.475    11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:38.475    11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:38.475    11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:38.475    11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:38.475    11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:38.475   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:38.475   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:38.475   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:09:38.475   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:38.475   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:38.475  [2024-12-16 11:31:04.532851] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:09:38.475  [2024-12-16 11:31:04.532925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline
00:09:38.733   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:38.733   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:38.733   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:38.733    11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:09:38.733    11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:38.733    11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:38.733    11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:38.733    11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:38.733   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:09:38.733   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:09:38.733   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:09:38.733   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:09:38.733   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:38.733   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:38.733   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:38.733   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:38.733  BaseBdev2
00:09:38.733   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:38.733   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:09:38.733   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:38.734  [
00:09:38.734  {
00:09:38.734  "name": "BaseBdev2",
00:09:38.734  "aliases": [
00:09:38.734  "bb2f1b16-e761-4d4c-b9bc-f90293624ac1"
00:09:38.734  ],
00:09:38.734  "product_name": "Malloc disk",
00:09:38.734  "block_size": 512,
00:09:38.734  "num_blocks": 65536,
00:09:38.734  "uuid": "bb2f1b16-e761-4d4c-b9bc-f90293624ac1",
00:09:38.734  "assigned_rate_limits": {
00:09:38.734  "rw_ios_per_sec": 0,
00:09:38.734  "rw_mbytes_per_sec": 0,
00:09:38.734  "r_mbytes_per_sec": 0,
00:09:38.734  "w_mbytes_per_sec": 0
00:09:38.734  },
00:09:38.734  "claimed": false,
00:09:38.734  "zoned": false,
00:09:38.734  "supported_io_types": {
00:09:38.734  "read": true,
00:09:38.734  "write": true,
00:09:38.734  "unmap": true,
00:09:38.734  "flush": true,
00:09:38.734  "reset": true,
00:09:38.734  "nvme_admin": false,
00:09:38.734  "nvme_io": false,
00:09:38.734  "nvme_io_md": false,
00:09:38.734  "write_zeroes": true,
00:09:38.734  "zcopy": true,
00:09:38.734  "get_zone_info": false,
00:09:38.734  "zone_management": false,
00:09:38.734  "zone_append": false,
00:09:38.734  "compare": false,
00:09:38.734  "compare_and_write": false,
00:09:38.734  "abort": true,
00:09:38.734  "seek_hole": false,
00:09:38.734  "seek_data": false,
00:09:38.734  "copy": true,
00:09:38.734  "nvme_iov_md": false
00:09:38.734  },
00:09:38.734  "memory_domains": [
00:09:38.734  {
00:09:38.734  "dma_device_id": "system",
00:09:38.734  "dma_device_type": 1
00:09:38.734  },
00:09:38.734  {
00:09:38.734  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:38.734  "dma_device_type": 2
00:09:38.734  }
00:09:38.734  ],
00:09:38.734  "driver_specific": {}
00:09:38.734  }
00:09:38.734  ]
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:38.734  BaseBdev3
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:38.734  [
00:09:38.734  {
00:09:38.734  "name": "BaseBdev3",
00:09:38.734  "aliases": [
00:09:38.734  "47dcf94d-7cc6-49e5-8245-c9aa0ecf90dc"
00:09:38.734  ],
00:09:38.734  "product_name": "Malloc disk",
00:09:38.734  "block_size": 512,
00:09:38.734  "num_blocks": 65536,
00:09:38.734  "uuid": "47dcf94d-7cc6-49e5-8245-c9aa0ecf90dc",
00:09:38.734  "assigned_rate_limits": {
00:09:38.734  "rw_ios_per_sec": 0,
00:09:38.734  "rw_mbytes_per_sec": 0,
00:09:38.734  "r_mbytes_per_sec": 0,
00:09:38.734  "w_mbytes_per_sec": 0
00:09:38.734  },
00:09:38.734  "claimed": false,
00:09:38.734  "zoned": false,
00:09:38.734  "supported_io_types": {
00:09:38.734  "read": true,
00:09:38.734  "write": true,
00:09:38.734  "unmap": true,
00:09:38.734  "flush": true,
00:09:38.734  "reset": true,
00:09:38.734  "nvme_admin": false,
00:09:38.734  "nvme_io": false,
00:09:38.734  "nvme_io_md": false,
00:09:38.734  "write_zeroes": true,
00:09:38.734  "zcopy": true,
00:09:38.734  "get_zone_info": false,
00:09:38.734  "zone_management": false,
00:09:38.734  "zone_append": false,
00:09:38.734  "compare": false,
00:09:38.734  "compare_and_write": false,
00:09:38.734  "abort": true,
00:09:38.734  "seek_hole": false,
00:09:38.734  "seek_data": false,
00:09:38.734  "copy": true,
00:09:38.734  "nvme_iov_md": false
00:09:38.734  },
00:09:38.734  "memory_domains": [
00:09:38.734  {
00:09:38.734  "dma_device_id": "system",
00:09:38.734  "dma_device_type": 1
00:09:38.734  },
00:09:38.734  {
00:09:38.734  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:38.734  "dma_device_type": 2
00:09:38.734  }
00:09:38.734  ],
00:09:38.734  "driver_specific": {}
00:09:38.734  }
00:09:38.734  ]
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:38.734  [2024-12-16 11:31:04.701391] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:38.734  [2024-12-16 11:31:04.701555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:38.734  [2024-12-16 11:31:04.701614] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:38.734  [2024-12-16 11:31:04.703795] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:38.734    11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:38.734    11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:38.734    11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:38.734    11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:38.734    11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:38.734   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:38.734    "name": "Existed_Raid",
00:09:38.734    "uuid": "34096f7b-465e-4596-b1fd-0cb4f55c502c",
00:09:38.734    "strip_size_kb": 64,
00:09:38.734    "state": "configuring",
00:09:38.734    "raid_level": "raid0",
00:09:38.734    "superblock": true,
00:09:38.734    "num_base_bdevs": 3,
00:09:38.734    "num_base_bdevs_discovered": 2,
00:09:38.734    "num_base_bdevs_operational": 3,
00:09:38.734    "base_bdevs_list": [
00:09:38.734      {
00:09:38.734        "name": "BaseBdev1",
00:09:38.734        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:38.734        "is_configured": false,
00:09:38.734        "data_offset": 0,
00:09:38.734        "data_size": 0
00:09:38.734      },
00:09:38.734      {
00:09:38.734        "name": "BaseBdev2",
00:09:38.734        "uuid": "bb2f1b16-e761-4d4c-b9bc-f90293624ac1",
00:09:38.734        "is_configured": true,
00:09:38.734        "data_offset": 2048,
00:09:38.734        "data_size": 63488
00:09:38.734      },
00:09:38.734      {
00:09:38.734        "name": "BaseBdev3",
00:09:38.734        "uuid": "47dcf94d-7cc6-49e5-8245-c9aa0ecf90dc",
00:09:38.734        "is_configured": true,
00:09:38.734        "data_offset": 2048,
00:09:38.735        "data_size": 63488
00:09:38.735      }
00:09:38.735    ]
00:09:38.735  }'
00:09:38.735   11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:38.735   11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:39.300   11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:09:39.300   11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:39.300   11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:39.300  [2024-12-16 11:31:05.200632] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:39.300   11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:39.300   11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:39.300   11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:39.300   11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:39.300   11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:39.300   11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:39.300   11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:39.300   11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:39.300   11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:39.300   11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:39.300   11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:39.300    11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:39.301    11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:39.301    11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:39.301    11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:39.301    11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:39.301   11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:39.301    "name": "Existed_Raid",
00:09:39.301    "uuid": "34096f7b-465e-4596-b1fd-0cb4f55c502c",
00:09:39.301    "strip_size_kb": 64,
00:09:39.301    "state": "configuring",
00:09:39.301    "raid_level": "raid0",
00:09:39.301    "superblock": true,
00:09:39.301    "num_base_bdevs": 3,
00:09:39.301    "num_base_bdevs_discovered": 1,
00:09:39.301    "num_base_bdevs_operational": 3,
00:09:39.301    "base_bdevs_list": [
00:09:39.301      {
00:09:39.301        "name": "BaseBdev1",
00:09:39.301        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:39.301        "is_configured": false,
00:09:39.301        "data_offset": 0,
00:09:39.301        "data_size": 0
00:09:39.301      },
00:09:39.301      {
00:09:39.301        "name": null,
00:09:39.301        "uuid": "bb2f1b16-e761-4d4c-b9bc-f90293624ac1",
00:09:39.301        "is_configured": false,
00:09:39.301        "data_offset": 0,
00:09:39.301        "data_size": 63488
00:09:39.301      },
00:09:39.301      {
00:09:39.301        "name": "BaseBdev3",
00:09:39.301        "uuid": "47dcf94d-7cc6-49e5-8245-c9aa0ecf90dc",
00:09:39.301        "is_configured": true,
00:09:39.301        "data_offset": 2048,
00:09:39.301        "data_size": 63488
00:09:39.301      }
00:09:39.301    ]
00:09:39.301  }'
00:09:39.301   11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:39.301   11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:39.868    11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:39.868    11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:09:39.868    11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:39.868    11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:39.869    11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:39.869  BaseBdev1
00:09:39.869  [2024-12-16 11:31:05.771172] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:39.869  [
00:09:39.869  {
00:09:39.869  "name": "BaseBdev1",
00:09:39.869  "aliases": [
00:09:39.869  "a70e0373-e191-4efc-9116-6a962b19687f"
00:09:39.869  ],
00:09:39.869  "product_name": "Malloc disk",
00:09:39.869  "block_size": 512,
00:09:39.869  "num_blocks": 65536,
00:09:39.869  "uuid": "a70e0373-e191-4efc-9116-6a962b19687f",
00:09:39.869  "assigned_rate_limits": {
00:09:39.869  "rw_ios_per_sec": 0,
00:09:39.869  "rw_mbytes_per_sec": 0,
00:09:39.869  "r_mbytes_per_sec": 0,
00:09:39.869  "w_mbytes_per_sec": 0
00:09:39.869  },
00:09:39.869  "claimed": true,
00:09:39.869  "claim_type": "exclusive_write",
00:09:39.869  "zoned": false,
00:09:39.869  "supported_io_types": {
00:09:39.869  "read": true,
00:09:39.869  "write": true,
00:09:39.869  "unmap": true,
00:09:39.869  "flush": true,
00:09:39.869  "reset": true,
00:09:39.869  "nvme_admin": false,
00:09:39.869  "nvme_io": false,
00:09:39.869  "nvme_io_md": false,
00:09:39.869  "write_zeroes": true,
00:09:39.869  "zcopy": true,
00:09:39.869  "get_zone_info": false,
00:09:39.869  "zone_management": false,
00:09:39.869  "zone_append": false,
00:09:39.869  "compare": false,
00:09:39.869  "compare_and_write": false,
00:09:39.869  "abort": true,
00:09:39.869  "seek_hole": false,
00:09:39.869  "seek_data": false,
00:09:39.869  "copy": true,
00:09:39.869  "nvme_iov_md": false
00:09:39.869  },
00:09:39.869  "memory_domains": [
00:09:39.869  {
00:09:39.869  "dma_device_id": "system",
00:09:39.869  "dma_device_type": 1
00:09:39.869  },
00:09:39.869  {
00:09:39.869  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:39.869  "dma_device_type": 2
00:09:39.869  }
00:09:39.869  ],
00:09:39.869  "driver_specific": {}
00:09:39.869  }
00:09:39.869  ]
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:39.869    11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:39.869    11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:39.869    11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:39.869    11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:39.869    11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:39.869    "name": "Existed_Raid",
00:09:39.869    "uuid": "34096f7b-465e-4596-b1fd-0cb4f55c502c",
00:09:39.869    "strip_size_kb": 64,
00:09:39.869    "state": "configuring",
00:09:39.869    "raid_level": "raid0",
00:09:39.869    "superblock": true,
00:09:39.869    "num_base_bdevs": 3,
00:09:39.869    "num_base_bdevs_discovered": 2,
00:09:39.869    "num_base_bdevs_operational": 3,
00:09:39.869    "base_bdevs_list": [
00:09:39.869      {
00:09:39.869        "name": "BaseBdev1",
00:09:39.869        "uuid": "a70e0373-e191-4efc-9116-6a962b19687f",
00:09:39.869        "is_configured": true,
00:09:39.869        "data_offset": 2048,
00:09:39.869        "data_size": 63488
00:09:39.869      },
00:09:39.869      {
00:09:39.869        "name": null,
00:09:39.869        "uuid": "bb2f1b16-e761-4d4c-b9bc-f90293624ac1",
00:09:39.869        "is_configured": false,
00:09:39.869        "data_offset": 0,
00:09:39.869        "data_size": 63488
00:09:39.869      },
00:09:39.869      {
00:09:39.869        "name": "BaseBdev3",
00:09:39.869        "uuid": "47dcf94d-7cc6-49e5-8245-c9aa0ecf90dc",
00:09:39.869        "is_configured": true,
00:09:39.869        "data_offset": 2048,
00:09:39.869        "data_size": 63488
00:09:39.869      }
00:09:39.869    ]
00:09:39.869  }'
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:39.869   11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:40.436    11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:40.436    11:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:40.436    11:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:40.436    11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:40.436    11:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:40.436   11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:09:40.436   11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:09:40.436   11:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:40.436   11:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:40.436  [2024-12-16 11:31:06.386267] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:09:40.436   11:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:40.436   11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:40.436   11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:40.436   11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:40.436   11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:40.436   11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:40.436   11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:40.436   11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:40.436   11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:40.436   11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:40.436   11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:40.436    11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:40.436    11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:40.436    11:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:40.436    11:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:40.436    11:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:40.436   11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:40.436    "name": "Existed_Raid",
00:09:40.436    "uuid": "34096f7b-465e-4596-b1fd-0cb4f55c502c",
00:09:40.436    "strip_size_kb": 64,
00:09:40.436    "state": "configuring",
00:09:40.436    "raid_level": "raid0",
00:09:40.436    "superblock": true,
00:09:40.436    "num_base_bdevs": 3,
00:09:40.436    "num_base_bdevs_discovered": 1,
00:09:40.436    "num_base_bdevs_operational": 3,
00:09:40.436    "base_bdevs_list": [
00:09:40.436      {
00:09:40.436        "name": "BaseBdev1",
00:09:40.436        "uuid": "a70e0373-e191-4efc-9116-6a962b19687f",
00:09:40.436        "is_configured": true,
00:09:40.436        "data_offset": 2048,
00:09:40.437        "data_size": 63488
00:09:40.437      },
00:09:40.437      {
00:09:40.437        "name": null,
00:09:40.437        "uuid": "bb2f1b16-e761-4d4c-b9bc-f90293624ac1",
00:09:40.437        "is_configured": false,
00:09:40.437        "data_offset": 0,
00:09:40.437        "data_size": 63488
00:09:40.437      },
00:09:40.437      {
00:09:40.437        "name": null,
00:09:40.437        "uuid": "47dcf94d-7cc6-49e5-8245-c9aa0ecf90dc",
00:09:40.437        "is_configured": false,
00:09:40.437        "data_offset": 0,
00:09:40.437        "data_size": 63488
00:09:40.437      }
00:09:40.437    ]
00:09:40.437  }'
00:09:40.437   11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:40.437   11:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:41.004    11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:09:41.004    11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:41.004    11:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:41.004    11:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:41.004    11:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:41.004   11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:09:41.004   11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:09:41.004   11:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:41.004   11:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:41.004  [2024-12-16 11:31:06.885473] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:41.004   11:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:41.004   11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:41.004   11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:41.004   11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:41.004   11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:41.004   11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:41.004   11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:41.004   11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:41.004   11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:41.004   11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:41.004   11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:41.004    11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:41.004    11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:41.004    11:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:41.004    11:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:41.004    11:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:41.004   11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:41.004    "name": "Existed_Raid",
00:09:41.004    "uuid": "34096f7b-465e-4596-b1fd-0cb4f55c502c",
00:09:41.004    "strip_size_kb": 64,
00:09:41.004    "state": "configuring",
00:09:41.004    "raid_level": "raid0",
00:09:41.004    "superblock": true,
00:09:41.004    "num_base_bdevs": 3,
00:09:41.004    "num_base_bdevs_discovered": 2,
00:09:41.004    "num_base_bdevs_operational": 3,
00:09:41.004    "base_bdevs_list": [
00:09:41.004      {
00:09:41.004        "name": "BaseBdev1",
00:09:41.004        "uuid": "a70e0373-e191-4efc-9116-6a962b19687f",
00:09:41.004        "is_configured": true,
00:09:41.004        "data_offset": 2048,
00:09:41.004        "data_size": 63488
00:09:41.004      },
00:09:41.004      {
00:09:41.004        "name": null,
00:09:41.004        "uuid": "bb2f1b16-e761-4d4c-b9bc-f90293624ac1",
00:09:41.004        "is_configured": false,
00:09:41.004        "data_offset": 0,
00:09:41.004        "data_size": 63488
00:09:41.004      },
00:09:41.004      {
00:09:41.004        "name": "BaseBdev3",
00:09:41.004        "uuid": "47dcf94d-7cc6-49e5-8245-c9aa0ecf90dc",
00:09:41.004        "is_configured": true,
00:09:41.004        "data_offset": 2048,
00:09:41.004        "data_size": 63488
00:09:41.004      }
00:09:41.004    ]
00:09:41.004  }'
00:09:41.004   11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:41.004   11:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:41.573    11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:09:41.573    11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:41.573    11:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:41.573    11:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:41.573    11:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:41.573   11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:09:41.573   11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:41.573   11:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:41.573   11:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:41.573  [2024-12-16 11:31:07.380716] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:41.573   11:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:41.573   11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:41.573   11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:41.573   11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:41.573   11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:41.573   11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:41.573   11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:41.573   11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:41.573   11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:41.573   11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:41.573   11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:41.573    11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:41.573    11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:41.573    11:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:41.573    11:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:41.573    11:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:41.573   11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:41.573    "name": "Existed_Raid",
00:09:41.573    "uuid": "34096f7b-465e-4596-b1fd-0cb4f55c502c",
00:09:41.574    "strip_size_kb": 64,
00:09:41.574    "state": "configuring",
00:09:41.574    "raid_level": "raid0",
00:09:41.574    "superblock": true,
00:09:41.574    "num_base_bdevs": 3,
00:09:41.574    "num_base_bdevs_discovered": 1,
00:09:41.574    "num_base_bdevs_operational": 3,
00:09:41.574    "base_bdevs_list": [
00:09:41.574      {
00:09:41.574        "name": null,
00:09:41.574        "uuid": "a70e0373-e191-4efc-9116-6a962b19687f",
00:09:41.574        "is_configured": false,
00:09:41.574        "data_offset": 0,
00:09:41.574        "data_size": 63488
00:09:41.574      },
00:09:41.574      {
00:09:41.574        "name": null,
00:09:41.574        "uuid": "bb2f1b16-e761-4d4c-b9bc-f90293624ac1",
00:09:41.574        "is_configured": false,
00:09:41.574        "data_offset": 0,
00:09:41.574        "data_size": 63488
00:09:41.574      },
00:09:41.574      {
00:09:41.574        "name": "BaseBdev3",
00:09:41.574        "uuid": "47dcf94d-7cc6-49e5-8245-c9aa0ecf90dc",
00:09:41.574        "is_configured": true,
00:09:41.574        "data_offset": 2048,
00:09:41.574        "data_size": 63488
00:09:41.574      }
00:09:41.574    ]
00:09:41.574  }'
00:09:41.574   11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:41.574   11:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:41.833    11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:41.833    11:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:41.833    11:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:41.833    11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:41.833    11:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:41.833   11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:09:41.833   11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:09:41.833   11:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:41.833   11:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:41.833  [2024-12-16 11:31:07.866912] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:41.833   11:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:41.833   11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:41.833   11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:41.833   11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:41.833   11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:41.833   11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:41.833   11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:41.833   11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:41.833   11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:41.833   11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:41.833   11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:41.833    11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:41.833    11:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:41.833    11:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:41.833    11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:41.833    11:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:42.092   11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:42.093    "name": "Existed_Raid",
00:09:42.093    "uuid": "34096f7b-465e-4596-b1fd-0cb4f55c502c",
00:09:42.093    "strip_size_kb": 64,
00:09:42.093    "state": "configuring",
00:09:42.093    "raid_level": "raid0",
00:09:42.093    "superblock": true,
00:09:42.093    "num_base_bdevs": 3,
00:09:42.093    "num_base_bdevs_discovered": 2,
00:09:42.093    "num_base_bdevs_operational": 3,
00:09:42.093    "base_bdevs_list": [
00:09:42.093      {
00:09:42.093        "name": null,
00:09:42.093        "uuid": "a70e0373-e191-4efc-9116-6a962b19687f",
00:09:42.093        "is_configured": false,
00:09:42.093        "data_offset": 0,
00:09:42.093        "data_size": 63488
00:09:42.093      },
00:09:42.093      {
00:09:42.093        "name": "BaseBdev2",
00:09:42.093        "uuid": "bb2f1b16-e761-4d4c-b9bc-f90293624ac1",
00:09:42.093        "is_configured": true,
00:09:42.093        "data_offset": 2048,
00:09:42.093        "data_size": 63488
00:09:42.093      },
00:09:42.093      {
00:09:42.093        "name": "BaseBdev3",
00:09:42.093        "uuid": "47dcf94d-7cc6-49e5-8245-c9aa0ecf90dc",
00:09:42.093        "is_configured": true,
00:09:42.093        "data_offset": 2048,
00:09:42.093        "data_size": 63488
00:09:42.093      }
00:09:42.093    ]
00:09:42.093  }'
00:09:42.093   11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:42.093   11:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:42.352    11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:42.352    11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:09:42.352    11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:42.352    11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:42.352    11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:42.352   11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:09:42.352    11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:42.352    11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:42.352    11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:42.352    11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:09:42.352    11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:42.612   11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a70e0373-e191-4efc-9116-6a962b19687f
00:09:42.612   11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:42.612   11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:42.612  [2024-12-16 11:31:08.433368] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:09:42.612  [2024-12-16 11:31:08.433594] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:09:42.612  [2024-12-16 11:31:08.433615] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:09:42.612  NewBaseBdev
00:09:42.612  [2024-12-16 11:31:08.433911] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:09:42.612   11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:42.612   11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:09:42.612  [2024-12-16 11:31:08.434051] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:09:42.612  [2024-12-16 11:31:08.434064] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00
00:09:42.612   11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev
00:09:42.612  [2024-12-16 11:31:08.434183] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:42.612   11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:42.612   11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:09:42.612   11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:42.612   11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:42.612   11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:42.612   11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:42.612   11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:42.612   11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:42.612   11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:09:42.612   11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:42.612   11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:42.612  [
00:09:42.612  {
00:09:42.612  "name": "NewBaseBdev",
00:09:42.612  "aliases": [
00:09:42.612  "a70e0373-e191-4efc-9116-6a962b19687f"
00:09:42.612  ],
00:09:42.612  "product_name": "Malloc disk",
00:09:42.612  "block_size": 512,
00:09:42.612  "num_blocks": 65536,
00:09:42.612  "uuid": "a70e0373-e191-4efc-9116-6a962b19687f",
00:09:42.612  "assigned_rate_limits": {
00:09:42.612  "rw_ios_per_sec": 0,
00:09:42.612  "rw_mbytes_per_sec": 0,
00:09:42.612  "r_mbytes_per_sec": 0,
00:09:42.612  "w_mbytes_per_sec": 0
00:09:42.612  },
00:09:42.612  "claimed": true,
00:09:42.612  "claim_type": "exclusive_write",
00:09:42.612  "zoned": false,
00:09:42.612  "supported_io_types": {
00:09:42.612  "read": true,
00:09:42.612  "write": true,
00:09:42.612  "unmap": true,
00:09:42.612  "flush": true,
00:09:42.612  "reset": true,
00:09:42.612  "nvme_admin": false,
00:09:42.612  "nvme_io": false,
00:09:42.612  "nvme_io_md": false,
00:09:42.612  "write_zeroes": true,
00:09:42.612  "zcopy": true,
00:09:42.612  "get_zone_info": false,
00:09:42.612  "zone_management": false,
00:09:42.612  "zone_append": false,
00:09:42.612  "compare": false,
00:09:42.612  "compare_and_write": false,
00:09:42.612  "abort": true,
00:09:42.612  "seek_hole": false,
00:09:42.612  "seek_data": false,
00:09:42.612  "copy": true,
00:09:42.612  "nvme_iov_md": false
00:09:42.612  },
00:09:42.612  "memory_domains": [
00:09:42.612  {
00:09:42.612  "dma_device_id": "system",
00:09:42.612  "dma_device_type": 1
00:09:42.612  },
00:09:42.612  {
00:09:42.612  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:42.612  "dma_device_type": 2
00:09:42.612  }
00:09:42.612  ],
00:09:42.612  "driver_specific": {}
00:09:42.612  }
00:09:42.612  ]
00:09:42.612   11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:42.612   11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:09:42.612   11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:09:42.612   11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:42.612   11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:42.612   11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:42.612   11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:42.612   11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:42.612   11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:42.612   11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:42.612   11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:42.612   11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:42.612    11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:42.612    11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:42.612    11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:42.612    11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:42.612    11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:42.612   11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:42.612    "name": "Existed_Raid",
00:09:42.612    "uuid": "34096f7b-465e-4596-b1fd-0cb4f55c502c",
00:09:42.612    "strip_size_kb": 64,
00:09:42.612    "state": "online",
00:09:42.612    "raid_level": "raid0",
00:09:42.612    "superblock": true,
00:09:42.612    "num_base_bdevs": 3,
00:09:42.612    "num_base_bdevs_discovered": 3,
00:09:42.612    "num_base_bdevs_operational": 3,
00:09:42.612    "base_bdevs_list": [
00:09:42.612      {
00:09:42.612        "name": "NewBaseBdev",
00:09:42.612        "uuid": "a70e0373-e191-4efc-9116-6a962b19687f",
00:09:42.612        "is_configured": true,
00:09:42.612        "data_offset": 2048,
00:09:42.612        "data_size": 63488
00:09:42.612      },
00:09:42.612      {
00:09:42.612        "name": "BaseBdev2",
00:09:42.612        "uuid": "bb2f1b16-e761-4d4c-b9bc-f90293624ac1",
00:09:42.612        "is_configured": true,
00:09:42.612        "data_offset": 2048,
00:09:42.612        "data_size": 63488
00:09:42.612      },
00:09:42.613      {
00:09:42.613        "name": "BaseBdev3",
00:09:42.613        "uuid": "47dcf94d-7cc6-49e5-8245-c9aa0ecf90dc",
00:09:42.613        "is_configured": true,
00:09:42.613        "data_offset": 2048,
00:09:42.613        "data_size": 63488
00:09:42.613      }
00:09:42.613    ]
00:09:42.613  }'
00:09:42.613   11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:42.613   11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:42.872   11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:09:42.872   11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:09:42.872   11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:42.872   11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:42.872   11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:09:42.872   11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:42.872    11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:09:42.872    11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:42.872    11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:42.872    11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:42.872  [2024-12-16 11:31:08.901061] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:42.872    11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:43.149   11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:43.149    "name": "Existed_Raid",
00:09:43.149    "aliases": [
00:09:43.149      "34096f7b-465e-4596-b1fd-0cb4f55c502c"
00:09:43.149    ],
00:09:43.149    "product_name": "Raid Volume",
00:09:43.149    "block_size": 512,
00:09:43.149    "num_blocks": 190464,
00:09:43.149    "uuid": "34096f7b-465e-4596-b1fd-0cb4f55c502c",
00:09:43.149    "assigned_rate_limits": {
00:09:43.149      "rw_ios_per_sec": 0,
00:09:43.149      "rw_mbytes_per_sec": 0,
00:09:43.149      "r_mbytes_per_sec": 0,
00:09:43.149      "w_mbytes_per_sec": 0
00:09:43.149    },
00:09:43.149    "claimed": false,
00:09:43.149    "zoned": false,
00:09:43.149    "supported_io_types": {
00:09:43.149      "read": true,
00:09:43.149      "write": true,
00:09:43.149      "unmap": true,
00:09:43.149      "flush": true,
00:09:43.149      "reset": true,
00:09:43.149      "nvme_admin": false,
00:09:43.149      "nvme_io": false,
00:09:43.149      "nvme_io_md": false,
00:09:43.149      "write_zeroes": true,
00:09:43.149      "zcopy": false,
00:09:43.149      "get_zone_info": false,
00:09:43.150      "zone_management": false,
00:09:43.150      "zone_append": false,
00:09:43.150      "compare": false,
00:09:43.150      "compare_and_write": false,
00:09:43.150      "abort": false,
00:09:43.150      "seek_hole": false,
00:09:43.150      "seek_data": false,
00:09:43.150      "copy": false,
00:09:43.150      "nvme_iov_md": false
00:09:43.150    },
00:09:43.150    "memory_domains": [
00:09:43.150      {
00:09:43.150        "dma_device_id": "system",
00:09:43.150        "dma_device_type": 1
00:09:43.150      },
00:09:43.150      {
00:09:43.150        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:43.150        "dma_device_type": 2
00:09:43.150      },
00:09:43.150      {
00:09:43.150        "dma_device_id": "system",
00:09:43.150        "dma_device_type": 1
00:09:43.150      },
00:09:43.150      {
00:09:43.150        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:43.150        "dma_device_type": 2
00:09:43.150      },
00:09:43.150      {
00:09:43.150        "dma_device_id": "system",
00:09:43.150        "dma_device_type": 1
00:09:43.150      },
00:09:43.150      {
00:09:43.150        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:43.150        "dma_device_type": 2
00:09:43.150      }
00:09:43.150    ],
00:09:43.150    "driver_specific": {
00:09:43.150      "raid": {
00:09:43.150        "uuid": "34096f7b-465e-4596-b1fd-0cb4f55c502c",
00:09:43.150        "strip_size_kb": 64,
00:09:43.150        "state": "online",
00:09:43.150        "raid_level": "raid0",
00:09:43.150        "superblock": true,
00:09:43.150        "num_base_bdevs": 3,
00:09:43.150        "num_base_bdevs_discovered": 3,
00:09:43.150        "num_base_bdevs_operational": 3,
00:09:43.150        "base_bdevs_list": [
00:09:43.150          {
00:09:43.150            "name": "NewBaseBdev",
00:09:43.150            "uuid": "a70e0373-e191-4efc-9116-6a962b19687f",
00:09:43.150            "is_configured": true,
00:09:43.150            "data_offset": 2048,
00:09:43.150            "data_size": 63488
00:09:43.150          },
00:09:43.150          {
00:09:43.150            "name": "BaseBdev2",
00:09:43.150            "uuid": "bb2f1b16-e761-4d4c-b9bc-f90293624ac1",
00:09:43.150            "is_configured": true,
00:09:43.150            "data_offset": 2048,
00:09:43.150            "data_size": 63488
00:09:43.150          },
00:09:43.150          {
00:09:43.150            "name": "BaseBdev3",
00:09:43.150            "uuid": "47dcf94d-7cc6-49e5-8245-c9aa0ecf90dc",
00:09:43.150            "is_configured": true,
00:09:43.150            "data_offset": 2048,
00:09:43.150            "data_size": 63488
00:09:43.150          }
00:09:43.150        ]
00:09:43.150      }
00:09:43.150    }
00:09:43.150  }'
00:09:43.150    11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:43.150   11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:09:43.150  BaseBdev2
00:09:43.150  BaseBdev3'
00:09:43.150    11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:43.150   11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:09:43.150   11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:43.150    11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:09:43.150    11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:43.150    11:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:43.150    11:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:43.150    11:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:43.150   11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:09:43.150   11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:09:43.150   11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:43.150    11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:09:43.150    11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:43.150    11:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:43.150    11:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:43.150    11:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:43.150   11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:09:43.150   11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:09:43.150   11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:43.150    11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:43.150    11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:09:43.150    11:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:43.150    11:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:43.150    11:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:43.150   11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:09:43.150   11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:09:43.150   11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:43.150   11:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:43.150   11:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:43.150  [2024-12-16 11:31:09.188218] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:43.150  [2024-12-16 11:31:09.188318] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:43.150  [2024-12-16 11:31:09.188434] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:43.150  [2024-12-16 11:31:09.188527] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:43.150  [2024-12-16 11:31:09.188605] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline
00:09:43.150   11:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:43.150   11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75918
00:09:43.150   11:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 75918 ']'
00:09:43.150   11:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 75918
00:09:43.150    11:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname
00:09:43.150   11:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:43.150    11:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75918
00:09:43.410   11:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:43.410   11:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:43.410   11:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75918'
00:09:43.410  killing process with pid 75918
00:09:43.410   11:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 75918
00:09:43.410  [2024-12-16 11:31:09.225817] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:43.410   11:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 75918
00:09:43.410  [2024-12-16 11:31:09.258974] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:43.669   11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:09:43.669  
00:09:43.669  real	0m9.281s
00:09:43.669  user	0m15.802s
00:09:43.669  sys	0m1.952s
00:09:43.669   11:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:43.669   11:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:43.669  ************************************
00:09:43.669  END TEST raid_state_function_test_sb
00:09:43.669  ************************************
00:09:43.669   11:31:09 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3
00:09:43.669   11:31:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:09:43.669   11:31:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:43.669   11:31:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:43.669  ************************************
00:09:43.669  START TEST raid_superblock_test
00:09:43.669  ************************************
00:09:43.669   11:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3
00:09:43.669   11:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0
00:09:43.669   11:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3
00:09:43.669   11:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:09:43.669   11:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:09:43.669   11:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:09:43.669   11:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:09:43.669   11:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:09:43.669   11:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:09:43.669   11:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:09:43.669   11:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:09:43.669   11:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:09:43.669   11:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:09:43.669   11:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:09:43.669   11:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']'
00:09:43.669   11:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:09:43.669   11:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:09:43.669   11:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=76527
00:09:43.669   11:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:09:43.669   11:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 76527
00:09:43.669   11:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 76527 ']'
00:09:43.669   11:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:43.669   11:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:43.669   11:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:43.669  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:43.669   11:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:43.669   11:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.669  [2024-12-16 11:31:09.689694] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:09:43.669  [2024-12-16 11:31:09.689935] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76527 ]
00:09:43.928  [2024-12-16 11:31:09.847333] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:43.928  [2024-12-16 11:31:09.900954] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:09:43.928  [2024-12-16 11:31:09.947076] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:43.928  [2024-12-16 11:31:09.947213] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:44.865  malloc1
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:44.865  [2024-12-16 11:31:10.643757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:09:44.865  [2024-12-16 11:31:10.643919] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:44.865  [2024-12-16 11:31:10.643975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:09:44.865  [2024-12-16 11:31:10.644020] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:44.865  [2024-12-16 11:31:10.646633] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:44.865  [2024-12-16 11:31:10.646725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:09:44.865  pt1
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:44.865  malloc2
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:44.865  [2024-12-16 11:31:10.684548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:44.865  [2024-12-16 11:31:10.684691] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:44.865  [2024-12-16 11:31:10.684734] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:09:44.865  [2024-12-16 11:31:10.684775] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:44.865  [2024-12-16 11:31:10.687283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:44.865  [2024-12-16 11:31:10.687381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:44.865  pt2
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:44.865  malloc3
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:44.865  [2024-12-16 11:31:10.710265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:09:44.865  [2024-12-16 11:31:10.710342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:44.865  [2024-12-16 11:31:10.710362] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:09:44.865  [2024-12-16 11:31:10.710375] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:44.865  [2024-12-16 11:31:10.712914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:44.865  [2024-12-16 11:31:10.712962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:09:44.865  pt3
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:44.865  [2024-12-16 11:31:10.722309] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:09:44.865  [2024-12-16 11:31:10.724680] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:44.865  [2024-12-16 11:31:10.724813] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:09:44.865  [2024-12-16 11:31:10.725031] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:09:44.865  [2024-12-16 11:31:10.725092] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:09:44.865  [2024-12-16 11:31:10.725445] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:09:44.865  [2024-12-16 11:31:10.725666] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:09:44.865  [2024-12-16 11:31:10.725729] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:09:44.865  [2024-12-16 11:31:10.725938] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:44.865   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:44.866   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:44.866   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:44.866   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:44.866   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:44.866   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:44.866    11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:44.866    11:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:44.866    11:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:44.866    11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:44.866    11:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:44.866   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:44.866    "name": "raid_bdev1",
00:09:44.866    "uuid": "ba53b4c3-173c-4a90-8ee9-22533041678d",
00:09:44.866    "strip_size_kb": 64,
00:09:44.866    "state": "online",
00:09:44.866    "raid_level": "raid0",
00:09:44.866    "superblock": true,
00:09:44.866    "num_base_bdevs": 3,
00:09:44.866    "num_base_bdevs_discovered": 3,
00:09:44.866    "num_base_bdevs_operational": 3,
00:09:44.866    "base_bdevs_list": [
00:09:44.866      {
00:09:44.866        "name": "pt1",
00:09:44.866        "uuid": "00000000-0000-0000-0000-000000000001",
00:09:44.866        "is_configured": true,
00:09:44.866        "data_offset": 2048,
00:09:44.866        "data_size": 63488
00:09:44.866      },
00:09:44.866      {
00:09:44.866        "name": "pt2",
00:09:44.866        "uuid": "00000000-0000-0000-0000-000000000002",
00:09:44.866        "is_configured": true,
00:09:44.866        "data_offset": 2048,
00:09:44.866        "data_size": 63488
00:09:44.866      },
00:09:44.866      {
00:09:44.866        "name": "pt3",
00:09:44.866        "uuid": "00000000-0000-0000-0000-000000000003",
00:09:44.866        "is_configured": true,
00:09:44.866        "data_offset": 2048,
00:09:44.866        "data_size": 63488
00:09:44.866      }
00:09:44.866    ]
00:09:44.866  }'
00:09:44.866   11:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:44.866   11:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.124   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:09:45.124   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:09:45.124   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:45.124   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:45.124   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:45.124   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:45.124    11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:45.124    11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:45.124    11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.124    11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.124  [2024-12-16 11:31:11.186023] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:45.383    11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.383   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:45.383    "name": "raid_bdev1",
00:09:45.383    "aliases": [
00:09:45.383      "ba53b4c3-173c-4a90-8ee9-22533041678d"
00:09:45.383    ],
00:09:45.383    "product_name": "Raid Volume",
00:09:45.383    "block_size": 512,
00:09:45.383    "num_blocks": 190464,
00:09:45.383    "uuid": "ba53b4c3-173c-4a90-8ee9-22533041678d",
00:09:45.383    "assigned_rate_limits": {
00:09:45.383      "rw_ios_per_sec": 0,
00:09:45.383      "rw_mbytes_per_sec": 0,
00:09:45.383      "r_mbytes_per_sec": 0,
00:09:45.383      "w_mbytes_per_sec": 0
00:09:45.383    },
00:09:45.383    "claimed": false,
00:09:45.383    "zoned": false,
00:09:45.383    "supported_io_types": {
00:09:45.383      "read": true,
00:09:45.383      "write": true,
00:09:45.383      "unmap": true,
00:09:45.383      "flush": true,
00:09:45.383      "reset": true,
00:09:45.383      "nvme_admin": false,
00:09:45.383      "nvme_io": false,
00:09:45.383      "nvme_io_md": false,
00:09:45.383      "write_zeroes": true,
00:09:45.383      "zcopy": false,
00:09:45.383      "get_zone_info": false,
00:09:45.383      "zone_management": false,
00:09:45.383      "zone_append": false,
00:09:45.383      "compare": false,
00:09:45.383      "compare_and_write": false,
00:09:45.383      "abort": false,
00:09:45.383      "seek_hole": false,
00:09:45.383      "seek_data": false,
00:09:45.383      "copy": false,
00:09:45.383      "nvme_iov_md": false
00:09:45.383    },
00:09:45.383    "memory_domains": [
00:09:45.383      {
00:09:45.383        "dma_device_id": "system",
00:09:45.383        "dma_device_type": 1
00:09:45.383      },
00:09:45.383      {
00:09:45.383        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:45.383        "dma_device_type": 2
00:09:45.383      },
00:09:45.383      {
00:09:45.383        "dma_device_id": "system",
00:09:45.383        "dma_device_type": 1
00:09:45.383      },
00:09:45.383      {
00:09:45.383        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:45.383        "dma_device_type": 2
00:09:45.383      },
00:09:45.383      {
00:09:45.383        "dma_device_id": "system",
00:09:45.383        "dma_device_type": 1
00:09:45.383      },
00:09:45.383      {
00:09:45.383        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:45.383        "dma_device_type": 2
00:09:45.383      }
00:09:45.383    ],
00:09:45.383    "driver_specific": {
00:09:45.383      "raid": {
00:09:45.383        "uuid": "ba53b4c3-173c-4a90-8ee9-22533041678d",
00:09:45.383        "strip_size_kb": 64,
00:09:45.383        "state": "online",
00:09:45.383        "raid_level": "raid0",
00:09:45.383        "superblock": true,
00:09:45.383        "num_base_bdevs": 3,
00:09:45.383        "num_base_bdevs_discovered": 3,
00:09:45.383        "num_base_bdevs_operational": 3,
00:09:45.383        "base_bdevs_list": [
00:09:45.383          {
00:09:45.383            "name": "pt1",
00:09:45.383            "uuid": "00000000-0000-0000-0000-000000000001",
00:09:45.383            "is_configured": true,
00:09:45.383            "data_offset": 2048,
00:09:45.383            "data_size": 63488
00:09:45.383          },
00:09:45.383          {
00:09:45.383            "name": "pt2",
00:09:45.383            "uuid": "00000000-0000-0000-0000-000000000002",
00:09:45.383            "is_configured": true,
00:09:45.383            "data_offset": 2048,
00:09:45.383            "data_size": 63488
00:09:45.383          },
00:09:45.383          {
00:09:45.383            "name": "pt3",
00:09:45.383            "uuid": "00000000-0000-0000-0000-000000000003",
00:09:45.383            "is_configured": true,
00:09:45.383            "data_offset": 2048,
00:09:45.383            "data_size": 63488
00:09:45.383          }
00:09:45.383        ]
00:09:45.383      }
00:09:45.383    }
00:09:45.383  }'
00:09:45.383    11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:45.383   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:09:45.383  pt2
00:09:45.383  pt3'
00:09:45.383    11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:45.383   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:09:45.383   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:45.383    11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:45.383    11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:09:45.383    11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.383    11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.383    11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.383   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:09:45.383   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:09:45.383   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:45.383    11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:09:45.383    11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:45.383    11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.383    11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.383    11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.383   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:09:45.383   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:09:45.383   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:45.383    11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:09:45.383    11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.383    11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:45.383    11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.384    11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.643   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:09:45.643   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:09:45.643    11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:09:45.643    11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:45.643    11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.643    11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.643  [2024-12-16 11:31:11.481495] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:45.643    11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.643   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ba53b4c3-173c-4a90-8ee9-22533041678d
00:09:45.643   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ba53b4c3-173c-4a90-8ee9-22533041678d ']'
00:09:45.643   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:45.643   11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.643   11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.643  [2024-12-16 11:31:11.525017] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:45.643  [2024-12-16 11:31:11.525060] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:45.643  [2024-12-16 11:31:11.525158] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:45.643  [2024-12-16 11:31:11.525238] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:45.643  [2024-12-16 11:31:11.525264] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:09:45.643   11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.643    11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:45.643    11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.643    11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:09:45.643    11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.643    11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.643   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:09:45.643   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:09:45.643   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:45.643   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:09:45.643   11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.643   11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.643   11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.643   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:45.643   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:09:45.643   11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.643   11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.643   11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.643   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:45.643   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:09:45.643   11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.643   11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.643   11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.643    11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:09:45.643    11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:09:45.643    11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.643    11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.643    11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.643   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:09:45.643   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:09:45.643   11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:09:45.643   11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:09:45.643   11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:09:45.643   11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:45.643    11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:09:45.643   11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:45.644   11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:09:45.644   11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.644   11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.644  [2024-12-16 11:31:11.692779] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:09:45.644  [2024-12-16 11:31:11.695061] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:09:45.644  [2024-12-16 11:31:11.695117] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:09:45.644  [2024-12-16 11:31:11.695172] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:09:45.644  [2024-12-16 11:31:11.695224] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:09:45.644  [2024-12-16 11:31:11.695248] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:09:45.644  [2024-12-16 11:31:11.695274] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:45.644  [2024-12-16 11:31:11.695288] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring
00:09:45.644  request:
00:09:45.644  {
00:09:45.644  "name": "raid_bdev1",
00:09:45.644  "raid_level": "raid0",
00:09:45.644  "base_bdevs": [
00:09:45.644  "malloc1",
00:09:45.644  "malloc2",
00:09:45.644  "malloc3"
00:09:45.644  ],
00:09:45.644  "strip_size_kb": 64,
00:09:45.644  "superblock": false,
00:09:45.644  "method": "bdev_raid_create",
00:09:45.644  "req_id": 1
00:09:45.644  }
00:09:45.644  Got JSON-RPC error response
00:09:45.644  response:
00:09:45.644  {
00:09:45.644  "code": -17,
00:09:45.644  "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:09:45.644  }
00:09:45.644   11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:09:45.644   11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:09:45.644   11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:09:45.644   11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:09:45.644   11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
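The -17 (File exists) response above is the negative path this test expects: raid_bdev1 was created with -s, so its superblock stays on malloc1..malloc3 after the volume and its passthru bdevs are deleted, and a second bdev_raid_create over those bases is refused. A minimal sketch of that sequence, assuming rpc_cmd is the autotest wrapper around SPDK's rpc.py and reusing the 32 MiB / 512 B malloc sizes shown in this log:

    # create the base bdevs and wrap each in a passthru bdev (pt1..pt3)
    for i in 1 2 3; do
        rpc_cmd bdev_malloc_create 32 512 -b malloc$i
        rpc_cmd bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
    done

    # raid0 over the passthru bdevs, 64 KiB strip size, with an on-disk superblock (-s)
    rpc_cmd bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s

    # tear the volume down again; the superblock written by -s remains on the data
    rpc_cmd bdev_raid_delete raid_bdev1
    for i in 1 2 3; do rpc_cmd bdev_passthru_delete pt$i; done

    # creating a new raid directly on malloc1..3 now hits the stale superblock and
    # is rejected with -17 (File exists) -- the condition the NOT wrapper asserts above
    rpc_cmd bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 && exit 1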
00:09:45.902    11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:45.902    11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.902    11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:09:45.902    11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.902    11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.902   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:09:45.902   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:09:45.902   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:09:45.902   11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.902   11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.902  [2024-12-16 11:31:11.760682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:09:45.902  [2024-12-16 11:31:11.760751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:45.902  [2024-12-16 11:31:11.760770] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:09:45.902  [2024-12-16 11:31:11.760783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:45.902  [2024-12-16 11:31:11.763344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:45.902  [2024-12-16 11:31:11.763396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:09:45.902  [2024-12-16 11:31:11.763480] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:09:45.902  [2024-12-16 11:31:11.763520] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:09:45.902  pt1
00:09:45.902   11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.902   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3
00:09:45.902   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:45.902   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:45.902   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:45.902   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:45.902   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:45.902   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:45.902   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:45.902   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:45.903   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:45.903    11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:45.903    11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.903    11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.903    11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:45.903    11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.903   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:45.903    "name": "raid_bdev1",
00:09:45.903    "uuid": "ba53b4c3-173c-4a90-8ee9-22533041678d",
00:09:45.903    "strip_size_kb": 64,
00:09:45.903    "state": "configuring",
00:09:45.903    "raid_level": "raid0",
00:09:45.903    "superblock": true,
00:09:45.903    "num_base_bdevs": 3,
00:09:45.903    "num_base_bdevs_discovered": 1,
00:09:45.903    "num_base_bdevs_operational": 3,
00:09:45.903    "base_bdevs_list": [
00:09:45.903      {
00:09:45.903        "name": "pt1",
00:09:45.903        "uuid": "00000000-0000-0000-0000-000000000001",
00:09:45.903        "is_configured": true,
00:09:45.903        "data_offset": 2048,
00:09:45.903        "data_size": 63488
00:09:45.903      },
00:09:45.903      {
00:09:45.903        "name": null,
00:09:45.903        "uuid": "00000000-0000-0000-0000-000000000002",
00:09:45.903        "is_configured": false,
00:09:45.903        "data_offset": 2048,
00:09:45.903        "data_size": 63488
00:09:45.903      },
00:09:45.903      {
00:09:45.903        "name": null,
00:09:45.903        "uuid": "00000000-0000-0000-0000-000000000003",
00:09:45.903        "is_configured": false,
00:09:45.903        "data_offset": 2048,
00:09:45.903        "data_size": 63488
00:09:45.903      }
00:09:45.903    ]
00:09:45.903  }'
00:09:45.903   11:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:45.903   11:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:46.470   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:09:46.470   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:46.470   11:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:46.470   11:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:46.470  [2024-12-16 11:31:12.268036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:46.470  [2024-12-16 11:31:12.268215] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:46.470  [2024-12-16 11:31:12.268243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:09:46.470  [2024-12-16 11:31:12.268259] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:46.470  [2024-12-16 11:31:12.268731] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:46.470  [2024-12-16 11:31:12.268763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:46.470  [2024-12-16 11:31:12.268849] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:09:46.470  [2024-12-16 11:31:12.268876] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:46.470  pt2
00:09:46.470   11:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:46.470   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:09:46.470   11:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:46.470   11:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:46.470  [2024-12-16 11:31:12.280012] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:09:46.470   11:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:46.470   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3
00:09:46.470   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:46.470   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:46.470   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:46.470   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:46.470   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:46.470   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:46.470   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:46.470   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:46.470   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:46.470    11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:46.470    11:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:46.470    11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:46.470    11:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:46.470    11:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:46.470   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:46.470    "name": "raid_bdev1",
00:09:46.470    "uuid": "ba53b4c3-173c-4a90-8ee9-22533041678d",
00:09:46.470    "strip_size_kb": 64,
00:09:46.470    "state": "configuring",
00:09:46.470    "raid_level": "raid0",
00:09:46.470    "superblock": true,
00:09:46.470    "num_base_bdevs": 3,
00:09:46.470    "num_base_bdevs_discovered": 1,
00:09:46.470    "num_base_bdevs_operational": 3,
00:09:46.470    "base_bdevs_list": [
00:09:46.470      {
00:09:46.470        "name": "pt1",
00:09:46.470        "uuid": "00000000-0000-0000-0000-000000000001",
00:09:46.470        "is_configured": true,
00:09:46.470        "data_offset": 2048,
00:09:46.470        "data_size": 63488
00:09:46.470      },
00:09:46.470      {
00:09:46.470        "name": null,
00:09:46.470        "uuid": "00000000-0000-0000-0000-000000000002",
00:09:46.471        "is_configured": false,
00:09:46.471        "data_offset": 0,
00:09:46.471        "data_size": 63488
00:09:46.471      },
00:09:46.471      {
00:09:46.471        "name": null,
00:09:46.471        "uuid": "00000000-0000-0000-0000-000000000003",
00:09:46.471        "is_configured": false,
00:09:46.471        "data_offset": 2048,
00:09:46.471        "data_size": 63488
00:09:46.471      }
00:09:46.471    ]
00:09:46.471  }'
00:09:46.471   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:46.471   11:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:46.730   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:09:46.730   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:09:46.730   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:46.730   11:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:46.730   11:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:46.730  [2024-12-16 11:31:12.743739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:46.730  [2024-12-16 11:31:12.743826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:46.730  [2024-12-16 11:31:12.743850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:09:46.730  [2024-12-16 11:31:12.743861] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:46.730  [2024-12-16 11:31:12.744321] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:46.730  [2024-12-16 11:31:12.744342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:46.730  [2024-12-16 11:31:12.744431] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:09:46.730  [2024-12-16 11:31:12.744455] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:46.730  pt2
00:09:46.730   11:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:46.730   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:09:46.730   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:09:46.730   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:09:46.730   11:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:46.730   11:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:46.730  [2024-12-16 11:31:12.755682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:09:46.730  [2024-12-16 11:31:12.755826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:46.730  [2024-12-16 11:31:12.755854] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:09:46.730  [2024-12-16 11:31:12.755864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:46.730  [2024-12-16 11:31:12.756264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:46.730  [2024-12-16 11:31:12.756285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:09:46.730  [2024-12-16 11:31:12.756355] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:09:46.730  [2024-12-16 11:31:12.756375] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:09:46.730  [2024-12-16 11:31:12.756480] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:09:46.730  [2024-12-16 11:31:12.756490] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:09:46.730  [2024-12-16 11:31:12.756774] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:09:46.730  [2024-12-16 11:31:12.756895] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:09:46.730  [2024-12-16 11:31:12.756909] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:09:46.730  [2024-12-16 11:31:12.757021] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:46.730  pt3
00:09:46.730   11:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:46.730   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:09:46.730   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:09:46.730   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:09:46.730   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:46.730   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:46.730   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:46.730   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:46.730   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:46.730   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:46.730   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:46.730   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:46.730   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:46.730    11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:46.730    11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:46.730    11:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:46.730    11:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:46.730    11:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:46.989   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:46.989    "name": "raid_bdev1",
00:09:46.989    "uuid": "ba53b4c3-173c-4a90-8ee9-22533041678d",
00:09:46.989    "strip_size_kb": 64,
00:09:46.989    "state": "online",
00:09:46.989    "raid_level": "raid0",
00:09:46.989    "superblock": true,
00:09:46.989    "num_base_bdevs": 3,
00:09:46.989    "num_base_bdevs_discovered": 3,
00:09:46.989    "num_base_bdevs_operational": 3,
00:09:46.989    "base_bdevs_list": [
00:09:46.989      {
00:09:46.989        "name": "pt1",
00:09:46.989        "uuid": "00000000-0000-0000-0000-000000000001",
00:09:46.989        "is_configured": true,
00:09:46.989        "data_offset": 2048,
00:09:46.989        "data_size": 63488
00:09:46.989      },
00:09:46.989      {
00:09:46.989        "name": "pt2",
00:09:46.989        "uuid": "00000000-0000-0000-0000-000000000002",
00:09:46.989        "is_configured": true,
00:09:46.989        "data_offset": 2048,
00:09:46.989        "data_size": 63488
00:09:46.989      },
00:09:46.989      {
00:09:46.989        "name": "pt3",
00:09:46.989        "uuid": "00000000-0000-0000-0000-000000000003",
00:09:46.989        "is_configured": true,
00:09:46.989        "data_offset": 2048,
00:09:46.989        "data_size": 63488
00:09:46.989      }
00:09:46.989    ]
00:09:46.989  }'
00:09:46.989   11:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:46.989   11:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:47.248   11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:09:47.248   11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:09:47.248   11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:47.248   11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:47.248   11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:47.248   11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:47.248    11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:47.248    11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:47.248    11:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.248    11:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:47.248  [2024-12-16 11:31:13.279749] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:47.248    11:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.248   11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:47.248    "name": "raid_bdev1",
00:09:47.248    "aliases": [
00:09:47.248      "ba53b4c3-173c-4a90-8ee9-22533041678d"
00:09:47.248    ],
00:09:47.248    "product_name": "Raid Volume",
00:09:47.248    "block_size": 512,
00:09:47.248    "num_blocks": 190464,
00:09:47.248    "uuid": "ba53b4c3-173c-4a90-8ee9-22533041678d",
00:09:47.248    "assigned_rate_limits": {
00:09:47.248      "rw_ios_per_sec": 0,
00:09:47.248      "rw_mbytes_per_sec": 0,
00:09:47.248      "r_mbytes_per_sec": 0,
00:09:47.248      "w_mbytes_per_sec": 0
00:09:47.248    },
00:09:47.248    "claimed": false,
00:09:47.248    "zoned": false,
00:09:47.248    "supported_io_types": {
00:09:47.248      "read": true,
00:09:47.248      "write": true,
00:09:47.248      "unmap": true,
00:09:47.248      "flush": true,
00:09:47.248      "reset": true,
00:09:47.248      "nvme_admin": false,
00:09:47.248      "nvme_io": false,
00:09:47.248      "nvme_io_md": false,
00:09:47.248      "write_zeroes": true,
00:09:47.248      "zcopy": false,
00:09:47.248      "get_zone_info": false,
00:09:47.248      "zone_management": false,
00:09:47.248      "zone_append": false,
00:09:47.248      "compare": false,
00:09:47.248      "compare_and_write": false,
00:09:47.248      "abort": false,
00:09:47.248      "seek_hole": false,
00:09:47.248      "seek_data": false,
00:09:47.248      "copy": false,
00:09:47.248      "nvme_iov_md": false
00:09:47.248    },
00:09:47.248    "memory_domains": [
00:09:47.248      {
00:09:47.248        "dma_device_id": "system",
00:09:47.248        "dma_device_type": 1
00:09:47.248      },
00:09:47.248      {
00:09:47.248        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:47.248        "dma_device_type": 2
00:09:47.248      },
00:09:47.248      {
00:09:47.248        "dma_device_id": "system",
00:09:47.248        "dma_device_type": 1
00:09:47.248      },
00:09:47.248      {
00:09:47.248        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:47.248        "dma_device_type": 2
00:09:47.248      },
00:09:47.248      {
00:09:47.248        "dma_device_id": "system",
00:09:47.248        "dma_device_type": 1
00:09:47.248      },
00:09:47.248      {
00:09:47.248        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:47.248        "dma_device_type": 2
00:09:47.248      }
00:09:47.248    ],
00:09:47.248    "driver_specific": {
00:09:47.248      "raid": {
00:09:47.248        "uuid": "ba53b4c3-173c-4a90-8ee9-22533041678d",
00:09:47.248        "strip_size_kb": 64,
00:09:47.248        "state": "online",
00:09:47.248        "raid_level": "raid0",
00:09:47.249        "superblock": true,
00:09:47.249        "num_base_bdevs": 3,
00:09:47.249        "num_base_bdevs_discovered": 3,
00:09:47.249        "num_base_bdevs_operational": 3,
00:09:47.249        "base_bdevs_list": [
00:09:47.249          {
00:09:47.249            "name": "pt1",
00:09:47.249            "uuid": "00000000-0000-0000-0000-000000000001",
00:09:47.249            "is_configured": true,
00:09:47.249            "data_offset": 2048,
00:09:47.249            "data_size": 63488
00:09:47.249          },
00:09:47.249          {
00:09:47.249            "name": "pt2",
00:09:47.249            "uuid": "00000000-0000-0000-0000-000000000002",
00:09:47.249            "is_configured": true,
00:09:47.249            "data_offset": 2048,
00:09:47.249            "data_size": 63488
00:09:47.249          },
00:09:47.249          {
00:09:47.249            "name": "pt3",
00:09:47.249            "uuid": "00000000-0000-0000-0000-000000000003",
00:09:47.249            "is_configured": true,
00:09:47.249            "data_offset": 2048,
00:09:47.249            "data_size": 63488
00:09:47.249          }
00:09:47.249        ]
00:09:47.249      }
00:09:47.249    }
00:09:47.249  }'
00:09:47.249    11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:47.508   11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:09:47.508  pt2
00:09:47.508  pt3'
00:09:47.508    11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:47.508   11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:09:47.508   11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:47.508    11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:09:47.508    11:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.508    11:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:47.508    11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:47.508    11:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.508   11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:09:47.508   11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:09:47.508   11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:47.508    11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:09:47.508    11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:47.508    11:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.508    11:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:47.508    11:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.508   11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:09:47.508   11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:09:47.508   11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:47.508    11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:09:47.508    11:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.508    11:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:47.508    11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:47.508    11:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.508   11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:09:47.508   11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:09:47.508    11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:47.508    11:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.508    11:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:47.508    11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:09:47.508  [2024-12-16 11:31:13.563210] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:47.769    11:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.769   11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ba53b4c3-173c-4a90-8ee9-22533041678d '!=' ba53b4c3-173c-4a90-8ee9-22533041678d ']'
00:09:47.769   11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0
00:09:47.770   11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:47.770   11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:09:47.770   11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 76527
00:09:47.770   11:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 76527 ']'
00:09:47.770   11:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 76527
00:09:47.770    11:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname
00:09:47.770   11:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:47.770    11:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76527
00:09:47.770  killing process with pid 76527
00:09:47.770   11:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:47.770   11:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:47.770   11:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76527'
00:09:47.770   11:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 76527
00:09:47.770  [2024-12-16 11:31:13.650204] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:47.770  [2024-12-16 11:31:13.650312] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:47.770   11:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 76527
00:09:47.770  [2024-12-16 11:31:13.650389] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:47.770  [2024-12-16 11:31:13.650401] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:09:47.770  [2024-12-16 11:31:13.686279] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:48.034  ************************************
00:09:48.034  END TEST raid_superblock_test
00:09:48.034  ************************************
00:09:48.034   11:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:09:48.034  
00:09:48.034  real	0m4.353s
00:09:48.034  user	0m6.894s
00:09:48.034  sys	0m0.974s
00:09:48.034   11:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:48.034   11:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
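Throughout the test above, each step is checked through the same RPC/jq pattern: dump the raid bdev record with bdev_raid_get_bdevs and compare individual fields. A reduced sketch of that verification step, assuming the jq filters used by verify_raid_bdev_state in bdev_raid.sh (the field names match the JSON dumps captured in this log):

    # fetch the record for raid_bdev1 and compare the fields the test cares about
    tmp=$(rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

    [[ $(jq -r '.state'                     <<<"$tmp") == online ]]
    [[ $(jq -r '.raid_level'                <<<"$tmp") == raid0  ]]
    [[ $(jq -r '.strip_size_kb'             <<<"$tmp") == 64     ]]
    [[ $(jq -r '.num_base_bdevs_discovered' <<<"$tmp") == 3      ]]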
00:09:48.034   11:31:14 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read
00:09:48.034   11:31:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:09:48.034   11:31:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:48.034   11:31:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:48.034  ************************************
00:09:48.034  START TEST raid_read_error_test
00:09:48.034  ************************************
00:09:48.034   11:31:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read
00:09:48.034   11:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0
00:09:48.034   11:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3
00:09:48.034   11:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:09:48.034    11:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:09:48.034    11:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:48.034    11:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:09:48.034    11:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:48.034    11:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:48.034    11:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:09:48.034    11:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:48.034    11:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:48.034    11:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:09:48.034    11:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:48.034    11:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:48.034   11:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:09:48.034   11:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:09:48.034   11:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:09:48.034   11:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:09:48.034   11:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:09:48.034   11:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:09:48.034   11:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:09:48.034   11:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']'
00:09:48.034   11:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:09:48.034   11:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:09:48.034    11:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:09:48.034   11:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ffkaKg8GhM
00:09:48.034   11:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76769
00:09:48.034   11:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:09:48.034   11:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76769
00:09:48.034   11:31:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 76769 ']'
00:09:48.034  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:48.034   11:31:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:48.034   11:31:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:48.034   11:31:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:48.034   11:31:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:48.034   11:31:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:48.293  [2024-12-16 11:31:14.129462] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:09:48.293  [2024-12-16 11:31:14.129639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76769 ]
00:09:48.293  [2024-12-16 11:31:14.279467] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:48.293  [2024-12-16 11:31:14.332737] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:09:48.552  [2024-12-16 11:31:14.379074] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:48.552  [2024-12-16 11:31:14.379119] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:49.122   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:49.122   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0
00:09:49.122   11:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:49.122   11:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:09:49.122   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:49.122   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:49.122  BaseBdev1_malloc
00:09:49.122   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:49.122   11:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:09:49.122   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:49.122   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:49.122  true
00:09:49.122   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:49.122   11:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:09:49.122   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:49.122   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:49.122  [2024-12-16 11:31:15.123700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:09:49.122  [2024-12-16 11:31:15.123788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:49.122  [2024-12-16 11:31:15.123823] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:09:49.122  [2024-12-16 11:31:15.123835] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:49.122  [2024-12-16 11:31:15.126702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:49.122  [2024-12-16 11:31:15.126794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:09:49.122  BaseBdev1
00:09:49.122   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:49.122   11:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:49.122   11:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:09:49.122   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:49.122   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:49.122  BaseBdev2_malloc
00:09:49.122   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:49.122   11:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:09:49.122   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:49.122   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:49.122  true
00:09:49.122   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:49.122   11:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:09:49.122   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:49.122   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:49.122  [2024-12-16 11:31:15.172461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:09:49.122  [2024-12-16 11:31:15.172620] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:49.122  [2024-12-16 11:31:15.172647] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:09:49.122  [2024-12-16 11:31:15.172658] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:49.122  [2024-12-16 11:31:15.175120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:49.122  [2024-12-16 11:31:15.175166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:09:49.122  BaseBdev2
00:09:49.122   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:49.122   11:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:49.122   11:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:09:49.122   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:49.122   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:49.380  BaseBdev3_malloc
00:09:49.380   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:49.380   11:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:09:49.380   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:49.380   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:49.380  true
00:09:49.380   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:49.380   11:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:09:49.380   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:49.380   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:49.380  [2024-12-16 11:31:15.213949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:09:49.381  [2024-12-16 11:31:15.214020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:49.381  [2024-12-16 11:31:15.214044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:09:49.381  [2024-12-16 11:31:15.214055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:49.381  [2024-12-16 11:31:15.216555] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:49.381  [2024-12-16 11:31:15.216598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:09:49.381  BaseBdev3
00:09:49.381   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:49.381   11:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s
00:09:49.381   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:49.381   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:49.381  [2024-12-16 11:31:15.226009] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:49.381  [2024-12-16 11:31:15.228193] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:49.381  [2024-12-16 11:31:15.228394] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:49.381  [2024-12-16 11:31:15.228631] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:09:49.381  [2024-12-16 11:31:15.228676] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:09:49.381  [2024-12-16 11:31:15.228971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:09:49.381  [2024-12-16 11:31:15.229113] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:09:49.381  [2024-12-16 11:31:15.229124] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00
00:09:49.381  [2024-12-16 11:31:15.229254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:49.381   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:49.381   11:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:09:49.381   11:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:49.381   11:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:49.381   11:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:49.381   11:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:49.381   11:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:49.381   11:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:49.381   11:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:49.381   11:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:49.381   11:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:49.381    11:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:49.381    11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:49.381    11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:49.381    11:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:49.381    11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:49.381   11:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:49.381    "name": "raid_bdev1",
00:09:49.381    "uuid": "151b28fc-6342-413a-be8f-46383091c7fa",
00:09:49.381    "strip_size_kb": 64,
00:09:49.381    "state": "online",
00:09:49.381    "raid_level": "raid0",
00:09:49.381    "superblock": true,
00:09:49.381    "num_base_bdevs": 3,
00:09:49.381    "num_base_bdevs_discovered": 3,
00:09:49.381    "num_base_bdevs_operational": 3,
00:09:49.381    "base_bdevs_list": [
00:09:49.381      {
00:09:49.381        "name": "BaseBdev1",
00:09:49.381        "uuid": "d2dfdee0-151f-561a-acf3-0b7b995fb50d",
00:09:49.381        "is_configured": true,
00:09:49.381        "data_offset": 2048,
00:09:49.381        "data_size": 63488
00:09:49.381      },
00:09:49.381      {
00:09:49.381        "name": "BaseBdev2",
00:09:49.381        "uuid": "e767d421-3910-57d2-8e93-2222598feae6",
00:09:49.381        "is_configured": true,
00:09:49.381        "data_offset": 2048,
00:09:49.381        "data_size": 63488
00:09:49.381      },
00:09:49.381      {
00:09:49.381        "name": "BaseBdev3",
00:09:49.381        "uuid": "6e30d1cb-6751-50c0-94a2-e69f80ab5ca4",
00:09:49.381        "is_configured": true,
00:09:49.381        "data_offset": 2048,
00:09:49.381        "data_size": 63488
00:09:49.381      }
00:09:49.381    ]
00:09:49.381  }'
00:09:49.381   11:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:49.381   11:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:49.640   11:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:09:49.640   11:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:09:49.899  [2024-12-16 11:31:15.813512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:09:50.836   11:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:09:50.836   11:31:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:50.836   11:31:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:50.836   11:31:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:50.836   11:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:09:50.836   11:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:09:50.836   11:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3
00:09:50.836   11:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:09:50.836   11:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:50.836   11:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:50.836   11:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:50.836   11:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:50.836   11:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:50.836   11:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:50.836   11:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:50.836   11:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:50.836   11:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:50.836    11:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:50.836    11:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:50.836    11:31:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:50.836    11:31:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:50.836    11:31:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:50.836   11:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:50.836    "name": "raid_bdev1",
00:09:50.836    "uuid": "151b28fc-6342-413a-be8f-46383091c7fa",
00:09:50.836    "strip_size_kb": 64,
00:09:50.836    "state": "online",
00:09:50.836    "raid_level": "raid0",
00:09:50.836    "superblock": true,
00:09:50.836    "num_base_bdevs": 3,
00:09:50.836    "num_base_bdevs_discovered": 3,
00:09:50.836    "num_base_bdevs_operational": 3,
00:09:50.836    "base_bdevs_list": [
00:09:50.836      {
00:09:50.836        "name": "BaseBdev1",
00:09:50.836        "uuid": "d2dfdee0-151f-561a-acf3-0b7b995fb50d",
00:09:50.836        "is_configured": true,
00:09:50.836        "data_offset": 2048,
00:09:50.836        "data_size": 63488
00:09:50.836      },
00:09:50.836      {
00:09:50.836        "name": "BaseBdev2",
00:09:50.836        "uuid": "e767d421-3910-57d2-8e93-2222598feae6",
00:09:50.836        "is_configured": true,
00:09:50.836        "data_offset": 2048,
00:09:50.836        "data_size": 63488
00:09:50.836      },
00:09:50.836      {
00:09:50.836        "name": "BaseBdev3",
00:09:50.836        "uuid": "6e30d1cb-6751-50c0-94a2-e69f80ab5ca4",
00:09:50.836        "is_configured": true,
00:09:50.836        "data_offset": 2048,
00:09:50.836        "data_size": 63488
00:09:50.836      }
00:09:50.836    ]
00:09:50.836  }'
00:09:50.836   11:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:50.836   11:31:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.404   11:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:51.404   11:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:51.404   11:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.404  [2024-12-16 11:31:17.170224] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:51.404  [2024-12-16 11:31:17.170374] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:51.404  [2024-12-16 11:31:17.173490] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:51.404  [2024-12-16 11:31:17.173626] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:51.404  [2024-12-16 11:31:17.173698] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:51.404  [2024-12-16 11:31:17.173766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline
00:09:51.404  {
00:09:51.404    "results": [
00:09:51.404      {
00:09:51.404        "job": "raid_bdev1",
00:09:51.404        "core_mask": "0x1",
00:09:51.404        "workload": "randrw",
00:09:51.404        "percentage": 50,
00:09:51.404        "status": "finished",
00:09:51.404        "queue_depth": 1,
00:09:51.404        "io_size": 131072,
00:09:51.404        "runtime": 1.357296,
00:09:51.404        "iops": 14128.82672607891,
00:09:51.404        "mibps": 1766.1033407598638,
00:09:51.404        "io_failed": 1,
00:09:51.404        "io_timeout": 0,
00:09:51.404        "avg_latency_us": 97.81379009153957,
00:09:51.404        "min_latency_us": 19.227947598253277,
00:09:51.404        "max_latency_us": 1788.646288209607
00:09:51.404      }
00:09:51.404    ],
00:09:51.404    "core_count": 1
00:09:51.404  }
00:09:51.404   11:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:51.404   11:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76769
00:09:51.404   11:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 76769 ']'
00:09:51.404   11:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 76769
00:09:51.404    11:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname
00:09:51.404   11:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:51.404    11:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76769
00:09:51.404  killing process with pid 76769
00:09:51.404   11:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:51.404   11:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:51.404   11:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76769'
00:09:51.404   11:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 76769
00:09:51.404   11:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 76769
00:09:51.404  [2024-12-16 11:31:17.222241] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:51.404  [2024-12-16 11:31:17.249854] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:51.663    11:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ffkaKg8GhM
00:09:51.663    11:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:09:51.664    11:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:09:51.664   11:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74
00:09:51.664   11:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0
00:09:51.664   11:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:51.664   11:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:09:51.664   11:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]]
00:09:51.664  
00:09:51.664  real	0m3.498s
00:09:51.664  user	0m4.511s
00:09:51.664  sys	0m0.585s
00:09:51.664   11:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:51.664   11:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.664  ************************************
00:09:51.664  END TEST raid_read_error_test
00:09:51.664  ************************************
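Editor's sketch, for readers following the xtrace above: the raid_read_error_test pass boils down to a short RPC sequence against the bdevperf app. The lines below are a hand-written summary, not output from this run; the rpc.py path and the default /var/tmp/spdk.sock socket are assumptions, while the individual commands are exactly the ones visible in the bdev_raid.sh trace above.

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # assumed rpc.py location; talks to /var/tmp/spdk.sock by default
    for i in 1 2 3; do
        $RPC bdev_malloc_create 32 512 -b BaseBdev${i}_malloc              # 32 MB malloc bdev, 512 B blocks
        $RPC bdev_error_create BaseBdev${i}_malloc                         # error-injection wrapper named EE_BaseBdev${i}_malloc
        $RPC bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
    done
    $RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
    $RPC bdev_error_inject_error EE_BaseBdev1_malloc read failure          # fail reads on the first leg
    # bdevperf.py perform_tests drives the 60 s randrw workload at this point
    $RPC bdev_raid_delete raid_bdev1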
00:09:51.664   11:31:17 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write
00:09:51.664   11:31:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:09:51.664   11:31:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:51.664   11:31:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:51.664  ************************************
00:09:51.664  START TEST raid_write_error_test
00:09:51.664  ************************************
00:09:51.664   11:31:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write
00:09:51.664   11:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0
00:09:51.664   11:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3
00:09:51.664   11:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:09:51.664    11:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:09:51.664    11:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:51.664    11:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:09:51.664    11:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:51.664    11:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:51.664    11:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:09:51.664    11:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:51.664    11:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:51.664    11:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:09:51.664    11:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:51.664    11:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:51.664   11:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:09:51.664   11:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:09:51.664   11:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:09:51.664   11:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:09:51.664   11:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:09:51.664   11:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:09:51.664   11:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:09:51.664   11:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']'
00:09:51.664   11:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:09:51.664   11:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:09:51.664    11:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:09:51.664   11:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.06XGdcwrMy
00:09:51.664   11:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76898
00:09:51.664   11:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:09:51.664   11:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76898
00:09:51.664   11:31:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 76898 ']'
00:09:51.664   11:31:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:51.664   11:31:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:51.664   11:31:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:51.664  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:51.664   11:31:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:51.664   11:31:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.664  [2024-12-16 11:31:17.702804] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:09:51.664  [2024-12-16 11:31:17.702943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76898 ]
00:09:51.924  [2024-12-16 11:31:17.877126] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:51.924  [2024-12-16 11:31:17.932302] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:09:51.924  [2024-12-16 11:31:17.977896] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:51.924  [2024-12-16 11:31:17.977938] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:52.860  BaseBdev1_malloc
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:52.860  true
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:52.860  [2024-12-16 11:31:18.710417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:09:52.860  [2024-12-16 11:31:18.710569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:52.860  [2024-12-16 11:31:18.710656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:09:52.860  [2024-12-16 11:31:18.710704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:52.860  [2024-12-16 11:31:18.713274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:52.860  [2024-12-16 11:31:18.713374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:09:52.860  BaseBdev1
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:52.860  BaseBdev2_malloc
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:52.860  true
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:52.860  [2024-12-16 11:31:18.760486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:09:52.860  [2024-12-16 11:31:18.760625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:52.860  [2024-12-16 11:31:18.760688] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:09:52.860  [2024-12-16 11:31:18.760731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:52.860  [2024-12-16 11:31:18.763203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:52.860  [2024-12-16 11:31:18.763302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:09:52.860  BaseBdev2
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:52.860  BaseBdev3_malloc
00:09:52.860   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:52.861   11:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:09:52.861   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:52.861   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:52.861  true
00:09:52.861   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:52.861   11:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:09:52.861   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:52.861   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:52.861  [2024-12-16 11:31:18.801920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:09:52.861  [2024-12-16 11:31:18.801985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:52.861  [2024-12-16 11:31:18.802008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:09:52.861  [2024-12-16 11:31:18.802018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:52.861  [2024-12-16 11:31:18.804484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:52.861  [2024-12-16 11:31:18.804615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:09:52.861  BaseBdev3
00:09:52.861   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:52.861   11:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s
00:09:52.861   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:52.861   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:52.861  [2024-12-16 11:31:18.813951] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:52.861  [2024-12-16 11:31:18.816113] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:52.861  [2024-12-16 11:31:18.816272] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:52.861  [2024-12-16 11:31:18.816485] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:09:52.861  [2024-12-16 11:31:18.816504] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:09:52.861  [2024-12-16 11:31:18.816817] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:09:52.861  [2024-12-16 11:31:18.816963] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:09:52.861  [2024-12-16 11:31:18.816973] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00
00:09:52.861  [2024-12-16 11:31:18.817120] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:52.861   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:52.861   11:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:09:52.861   11:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:52.861   11:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:52.861   11:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:52.861   11:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:52.861   11:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:52.861   11:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:52.861   11:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:52.861   11:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:52.861   11:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:52.861    11:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:52.861    11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:52.861    11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:52.861    11:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:52.861    11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:52.861   11:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:52.861    "name": "raid_bdev1",
00:09:52.861    "uuid": "926fda86-2254-46c8-8c04-51eacc63c9a2",
00:09:52.861    "strip_size_kb": 64,
00:09:52.861    "state": "online",
00:09:52.861    "raid_level": "raid0",
00:09:52.861    "superblock": true,
00:09:52.861    "num_base_bdevs": 3,
00:09:52.861    "num_base_bdevs_discovered": 3,
00:09:52.861    "num_base_bdevs_operational": 3,
00:09:52.861    "base_bdevs_list": [
00:09:52.861      {
00:09:52.861        "name": "BaseBdev1",
00:09:52.861        "uuid": "f6cb8da6-997e-52e0-824a-12fc99d92cc7",
00:09:52.861        "is_configured": true,
00:09:52.861        "data_offset": 2048,
00:09:52.861        "data_size": 63488
00:09:52.861      },
00:09:52.861      {
00:09:52.861        "name": "BaseBdev2",
00:09:52.861        "uuid": "a1c3b4ef-3aa2-57e8-92af-1bab194a7c94",
00:09:52.861        "is_configured": true,
00:09:52.861        "data_offset": 2048,
00:09:52.861        "data_size": 63488
00:09:52.861      },
00:09:52.861      {
00:09:52.861        "name": "BaseBdev3",
00:09:52.861        "uuid": "cfe9d595-41e7-535e-87b3-4250d6edcd70",
00:09:52.861        "is_configured": true,
00:09:52.861        "data_offset": 2048,
00:09:52.861        "data_size": 63488
00:09:52.861      }
00:09:52.861    ]
00:09:52.861  }'
00:09:52.861   11:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:52.861   11:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:53.430   11:31:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:09:53.430   11:31:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:09:53.430  [2024-12-16 11:31:19.425513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:09:54.367   11:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:09:54.367   11:31:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:54.367   11:31:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:54.367   11:31:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:54.367   11:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:09:54.367   11:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:09:54.367   11:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3
00:09:54.367   11:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:09:54.367   11:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:54.367   11:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:54.367   11:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:54.367   11:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:54.367   11:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:54.367   11:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:54.367   11:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:54.367   11:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:54.367   11:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:54.367    11:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:54.367    11:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:54.367    11:31:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:54.367    11:31:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:54.368    11:31:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:54.368   11:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:54.368    "name": "raid_bdev1",
00:09:54.368    "uuid": "926fda86-2254-46c8-8c04-51eacc63c9a2",
00:09:54.368    "strip_size_kb": 64,
00:09:54.368    "state": "online",
00:09:54.368    "raid_level": "raid0",
00:09:54.368    "superblock": true,
00:09:54.368    "num_base_bdevs": 3,
00:09:54.368    "num_base_bdevs_discovered": 3,
00:09:54.368    "num_base_bdevs_operational": 3,
00:09:54.368    "base_bdevs_list": [
00:09:54.368      {
00:09:54.368        "name": "BaseBdev1",
00:09:54.368        "uuid": "f6cb8da6-997e-52e0-824a-12fc99d92cc7",
00:09:54.368        "is_configured": true,
00:09:54.368        "data_offset": 2048,
00:09:54.368        "data_size": 63488
00:09:54.368      },
00:09:54.368      {
00:09:54.368        "name": "BaseBdev2",
00:09:54.368        "uuid": "a1c3b4ef-3aa2-57e8-92af-1bab194a7c94",
00:09:54.368        "is_configured": true,
00:09:54.368        "data_offset": 2048,
00:09:54.368        "data_size": 63488
00:09:54.368      },
00:09:54.368      {
00:09:54.368        "name": "BaseBdev3",
00:09:54.368        "uuid": "cfe9d595-41e7-535e-87b3-4250d6edcd70",
00:09:54.368        "is_configured": true,
00:09:54.368        "data_offset": 2048,
00:09:54.368        "data_size": 63488
00:09:54.368      }
00:09:54.368    ]
00:09:54.368  }'
00:09:54.368   11:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:54.368   11:31:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:54.937   11:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:54.937   11:31:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:54.937   11:31:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:54.937  [2024-12-16 11:31:20.827440] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:54.937  [2024-12-16 11:31:20.827479] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:54.937  [2024-12-16 11:31:20.830621] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:54.937  [2024-12-16 11:31:20.830718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:54.937  [2024-12-16 11:31:20.830792] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:54.937  [2024-12-16 11:31:20.830847] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline
00:09:54.937  {
00:09:54.937    "results": [
00:09:54.937      {
00:09:54.937        "job": "raid_bdev1",
00:09:54.937        "core_mask": "0x1",
00:09:54.937        "workload": "randrw",
00:09:54.937        "percentage": 50,
00:09:54.937        "status": "finished",
00:09:54.937        "queue_depth": 1,
00:09:54.937        "io_size": 131072,
00:09:54.937        "runtime": 1.4025,
00:09:54.937        "iops": 13739.037433155081,
00:09:54.937        "mibps": 1717.3796791443851,
00:09:54.937        "io_failed": 1,
00:09:54.937        "io_timeout": 0,
00:09:54.937        "avg_latency_us": 100.59247639270036,
00:09:54.937        "min_latency_us": 33.98427947598253,
00:09:54.937        "max_latency_us": 1738.564192139738
00:09:54.937      }
00:09:54.937    ],
00:09:54.937    "core_count": 1
00:09:54.937  }
00:09:54.937   11:31:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:54.937   11:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76898
00:09:54.937   11:31:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 76898 ']'
00:09:54.937   11:31:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 76898
00:09:54.937    11:31:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname
00:09:54.937   11:31:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:54.937    11:31:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76898
00:09:54.937  killing process with pid 76898
00:09:54.937   11:31:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:54.937   11:31:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:54.937   11:31:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76898'
00:09:54.937   11:31:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 76898
00:09:54.937  [2024-12-16 11:31:20.879054] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:54.937   11:31:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 76898
00:09:54.937  [2024-12-16 11:31:20.906099] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:55.196    11:31:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.06XGdcwrMy
00:09:55.196    11:31:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:09:55.196    11:31:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:09:55.196   11:31:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71
00:09:55.196   11:31:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0
00:09:55.196  ************************************
00:09:55.196  END TEST raid_write_error_test
00:09:55.196  ************************************
00:09:55.196   11:31:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:55.196   11:31:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:09:55.196   11:31:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]]
00:09:55.196  
00:09:55.196  real	0m3.583s
00:09:55.196  user	0m4.633s
00:09:55.196  sys	0m0.631s
00:09:55.196   11:31:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:55.196   11:31:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
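Editor's sketch: both error tests above end the same way. The bdevperf JSON log written to the mktemp file is scraped for the raid_bdev1 result line, and the per-second failure rate decides the verdict. A rough reconstruction of that check follows (variable names are taken from the trace; the if/else structure is inferred from the has_redundancy call and the comparison shown above, not copied from bdev_raid.sh):

    fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
    if has_redundancy "$raid_level"; then
        [[ $fail_per_s = "0.00" ]]      # redundant levels must absorb the injected error
    else
        [[ $fail_per_s != "0.00" ]]     # raid0 has no redundancy, so the failure must surface (0.74 and 0.71 above)
    fi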
00:09:55.196   11:31:21 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:09:55.196   11:31:21 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 3 false
00:09:55.197   11:31:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:09:55.197   11:31:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:55.197   11:31:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:55.197  ************************************
00:09:55.197  START TEST raid_state_function_test
00:09:55.197  ************************************
00:09:55.197   11:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false
00:09:55.197   11:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:09:55.197   11:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:09:55.197   11:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:09:55.197   11:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:09:55.197    11:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:09:55.197    11:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:55.197    11:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:09:55.197    11:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:55.197    11:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:55.197    11:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:09:55.197    11:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:55.197    11:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:55.197    11:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:09:55.197    11:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:55.197    11:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:55.197   11:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:09:55.197   11:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:09:55.197   11:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:09:55.197   11:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:09:55.197   11:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:09:55.197   11:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:09:55.197   11:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:09:55.197   11:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:09:55.197   11:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:09:55.197   11:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:09:55.197   11:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:09:55.197   11:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=77031
00:09:55.456   11:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:09:55.456   11:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77031'
00:09:55.456  Process raid pid: 77031
00:09:55.456   11:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 77031
00:09:55.456   11:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 77031 ']'
00:09:55.456   11:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:55.456   11:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:55.456   11:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:55.456  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:55.456   11:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:55.456   11:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:55.456  [2024-12-16 11:31:21.346570] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:09:55.456  [2024-12-16 11:31:21.346782] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:55.456  [2024-12-16 11:31:21.512467] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:55.717  [2024-12-16 11:31:21.564211] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:09:55.717  [2024-12-16 11:31:21.609611] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:55.717  [2024-12-16 11:31:21.609739] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:56.285   11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:56.285   11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:09:56.285   11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:56.285   11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:56.285   11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:56.285  [2024-12-16 11:31:22.276172] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:56.285  [2024-12-16 11:31:22.276300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:56.285  [2024-12-16 11:31:22.276369] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:56.285  [2024-12-16 11:31:22.276418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:56.285  [2024-12-16 11:31:22.276452] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:56.285  [2024-12-16 11:31:22.276483] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:56.285   11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:56.285   11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:56.285   11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:56.285   11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:56.285   11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:56.285   11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:56.285   11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:56.285   11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:56.285   11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:56.285   11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:56.285   11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:56.285    11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:56.285    11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:56.285    11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:56.285    11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:56.285    11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:56.285   11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:56.285    "name": "Existed_Raid",
00:09:56.285    "uuid": "00000000-0000-0000-0000-000000000000",
00:09:56.285    "strip_size_kb": 64,
00:09:56.285    "state": "configuring",
00:09:56.285    "raid_level": "concat",
00:09:56.285    "superblock": false,
00:09:56.285    "num_base_bdevs": 3,
00:09:56.285    "num_base_bdevs_discovered": 0,
00:09:56.285    "num_base_bdevs_operational": 3,
00:09:56.285    "base_bdevs_list": [
00:09:56.285      {
00:09:56.285        "name": "BaseBdev1",
00:09:56.285        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:56.285        "is_configured": false,
00:09:56.285        "data_offset": 0,
00:09:56.285        "data_size": 0
00:09:56.285      },
00:09:56.285      {
00:09:56.285        "name": "BaseBdev2",
00:09:56.285        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:56.285        "is_configured": false,
00:09:56.285        "data_offset": 0,
00:09:56.285        "data_size": 0
00:09:56.285      },
00:09:56.285      {
00:09:56.285        "name": "BaseBdev3",
00:09:56.285        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:56.285        "is_configured": false,
00:09:56.285        "data_offset": 0,
00:09:56.285        "data_size": 0
00:09:56.285      }
00:09:56.285    ]
00:09:56.285  }'
00:09:56.285   11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:56.285   11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:56.874  [2024-12-16 11:31:22.791408] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:56.874  [2024-12-16 11:31:22.791514] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:56.874  [2024-12-16 11:31:22.799434] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:56.874  [2024-12-16 11:31:22.799530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:56.874  [2024-12-16 11:31:22.799580] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:56.874  [2024-12-16 11:31:22.799618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:56.874  [2024-12-16 11:31:22.799652] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:56.874  [2024-12-16 11:31:22.799680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:56.874  [2024-12-16 11:31:22.816991] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:56.874  BaseBdev1
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:56.874  [
00:09:56.874  {
00:09:56.874  "name": "BaseBdev1",
00:09:56.874  "aliases": [
00:09:56.874  "ed7d0b4a-3511-4f37-97c9-f7f078c747c4"
00:09:56.874  ],
00:09:56.874  "product_name": "Malloc disk",
00:09:56.874  "block_size": 512,
00:09:56.874  "num_blocks": 65536,
00:09:56.874  "uuid": "ed7d0b4a-3511-4f37-97c9-f7f078c747c4",
00:09:56.874  "assigned_rate_limits": {
00:09:56.874  "rw_ios_per_sec": 0,
00:09:56.874  "rw_mbytes_per_sec": 0,
00:09:56.874  "r_mbytes_per_sec": 0,
00:09:56.874  "w_mbytes_per_sec": 0
00:09:56.874  },
00:09:56.874  "claimed": true,
00:09:56.874  "claim_type": "exclusive_write",
00:09:56.874  "zoned": false,
00:09:56.874  "supported_io_types": {
00:09:56.874  "read": true,
00:09:56.874  "write": true,
00:09:56.874  "unmap": true,
00:09:56.874  "flush": true,
00:09:56.874  "reset": true,
00:09:56.874  "nvme_admin": false,
00:09:56.874  "nvme_io": false,
00:09:56.874  "nvme_io_md": false,
00:09:56.874  "write_zeroes": true,
00:09:56.874  "zcopy": true,
00:09:56.874  "get_zone_info": false,
00:09:56.874  "zone_management": false,
00:09:56.874  "zone_append": false,
00:09:56.874  "compare": false,
00:09:56.874  "compare_and_write": false,
00:09:56.874  "abort": true,
00:09:56.874  "seek_hole": false,
00:09:56.874  "seek_data": false,
00:09:56.874  "copy": true,
00:09:56.874  "nvme_iov_md": false
00:09:56.874  },
00:09:56.874  "memory_domains": [
00:09:56.874  {
00:09:56.874  "dma_device_id": "system",
00:09:56.874  "dma_device_type": 1
00:09:56.874  },
00:09:56.874  {
00:09:56.874  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:56.874  "dma_device_type": 2
00:09:56.874  }
00:09:56.874  ],
00:09:56.874  "driver_specific": {}
00:09:56.874  }
00:09:56.874  ]
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:56.874    11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:56.874    11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:56.874    11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:56.874    11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:56.874    11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:56.874    "name": "Existed_Raid",
00:09:56.874    "uuid": "00000000-0000-0000-0000-000000000000",
00:09:56.874    "strip_size_kb": 64,
00:09:56.874    "state": "configuring",
00:09:56.874    "raid_level": "concat",
00:09:56.874    "superblock": false,
00:09:56.874    "num_base_bdevs": 3,
00:09:56.874    "num_base_bdevs_discovered": 1,
00:09:56.874    "num_base_bdevs_operational": 3,
00:09:56.874    "base_bdevs_list": [
00:09:56.874      {
00:09:56.874        "name": "BaseBdev1",
00:09:56.874        "uuid": "ed7d0b4a-3511-4f37-97c9-f7f078c747c4",
00:09:56.874        "is_configured": true,
00:09:56.874        "data_offset": 0,
00:09:56.874        "data_size": 65536
00:09:56.874      },
00:09:56.874      {
00:09:56.874        "name": "BaseBdev2",
00:09:56.874        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:56.874        "is_configured": false,
00:09:56.874        "data_offset": 0,
00:09:56.874        "data_size": 0
00:09:56.874      },
00:09:56.874      {
00:09:56.874        "name": "BaseBdev3",
00:09:56.874        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:56.874        "is_configured": false,
00:09:56.874        "data_offset": 0,
00:09:56.874        "data_size": 0
00:09:56.874      }
00:09:56.874    ]
00:09:56.874  }'
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:56.874   11:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.441   11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:57.441   11:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:57.441   11:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.441  [2024-12-16 11:31:23.352203] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:57.441  [2024-12-16 11:31:23.352263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:09:57.441   11:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:57.441   11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:57.441   11:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:57.441   11:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.441  [2024-12-16 11:31:23.364227] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:57.441  [2024-12-16 11:31:23.366417] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:57.441  [2024-12-16 11:31:23.366518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:57.441  [2024-12-16 11:31:23.366547] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:57.441  [2024-12-16 11:31:23.366561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:57.441   11:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:57.441   11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:09:57.441   11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:57.441   11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:57.441   11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:57.441   11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:57.441   11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:57.441   11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:57.441   11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:57.441   11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:57.442   11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:57.442   11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:57.442   11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:57.442    11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:57.442    11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:57.442    11:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:57.442    11:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.442    11:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:57.442   11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:57.442    "name": "Existed_Raid",
00:09:57.442    "uuid": "00000000-0000-0000-0000-000000000000",
00:09:57.442    "strip_size_kb": 64,
00:09:57.442    "state": "configuring",
00:09:57.442    "raid_level": "concat",
00:09:57.442    "superblock": false,
00:09:57.442    "num_base_bdevs": 3,
00:09:57.442    "num_base_bdevs_discovered": 1,
00:09:57.442    "num_base_bdevs_operational": 3,
00:09:57.442    "base_bdevs_list": [
00:09:57.442      {
00:09:57.442        "name": "BaseBdev1",
00:09:57.442        "uuid": "ed7d0b4a-3511-4f37-97c9-f7f078c747c4",
00:09:57.442        "is_configured": true,
00:09:57.442        "data_offset": 0,
00:09:57.442        "data_size": 65536
00:09:57.442      },
00:09:57.442      {
00:09:57.442        "name": "BaseBdev2",
00:09:57.442        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:57.442        "is_configured": false,
00:09:57.442        "data_offset": 0,
00:09:57.442        "data_size": 0
00:09:57.442      },
00:09:57.442      {
00:09:57.442        "name": "BaseBdev3",
00:09:57.442        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:57.442        "is_configured": false,
00:09:57.442        "data_offset": 0,
00:09:57.442        "data_size": 0
00:09:57.442      }
00:09:57.442    ]
00:09:57.442  }'
00:09:57.442   11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:57.442   11:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.010   11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:58.010   11:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:58.010   11:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.010  [2024-12-16 11:31:23.851202] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:58.010  BaseBdev2
00:09:58.010   11:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:58.010   11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:09:58.010   11:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:09:58.010   11:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:58.010   11:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:58.010   11:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:58.010   11:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:58.010   11:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:58.010   11:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:58.010   11:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.010   11:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:58.010   11:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:58.010   11:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:58.010   11:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.010  [
00:09:58.010  {
00:09:58.010  "name": "BaseBdev2",
00:09:58.010  "aliases": [
00:09:58.010  "fc0a856d-16a0-4886-a546-5387f754cac8"
00:09:58.010  ],
00:09:58.010  "product_name": "Malloc disk",
00:09:58.010  "block_size": 512,
00:09:58.010  "num_blocks": 65536,
00:09:58.010  "uuid": "fc0a856d-16a0-4886-a546-5387f754cac8",
00:09:58.010  "assigned_rate_limits": {
00:09:58.010  "rw_ios_per_sec": 0,
00:09:58.010  "rw_mbytes_per_sec": 0,
00:09:58.010  "r_mbytes_per_sec": 0,
00:09:58.010  "w_mbytes_per_sec": 0
00:09:58.010  },
00:09:58.010  "claimed": true,
00:09:58.010  "claim_type": "exclusive_write",
00:09:58.010  "zoned": false,
00:09:58.010  "supported_io_types": {
00:09:58.010  "read": true,
00:09:58.010  "write": true,
00:09:58.010  "unmap": true,
00:09:58.010  "flush": true,
00:09:58.010  "reset": true,
00:09:58.010  "nvme_admin": false,
00:09:58.010  "nvme_io": false,
00:09:58.010  "nvme_io_md": false,
00:09:58.010  "write_zeroes": true,
00:09:58.010  "zcopy": true,
00:09:58.010  "get_zone_info": false,
00:09:58.010  "zone_management": false,
00:09:58.010  "zone_append": false,
00:09:58.010  "compare": false,
00:09:58.010  "compare_and_write": false,
00:09:58.010  "abort": true,
00:09:58.010  "seek_hole": false,
00:09:58.010  "seek_data": false,
00:09:58.010  "copy": true,
00:09:58.010  "nvme_iov_md": false
00:09:58.010  },
00:09:58.010  "memory_domains": [
00:09:58.010  {
00:09:58.010  "dma_device_id": "system",
00:09:58.010  "dma_device_type": 1
00:09:58.010  },
00:09:58.010  {
00:09:58.010  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:58.010  "dma_device_type": 2
00:09:58.010  }
00:09:58.010  ],
00:09:58.010  "driver_specific": {}
00:09:58.010  }
00:09:58.010  ]
00:09:58.010   11:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:58.010   11:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:58.010   11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:58.010   11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:58.010   11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:58.010   11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:58.010   11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:58.010   11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:58.010   11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:58.010   11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:58.010   11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:58.010   11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:58.010   11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:58.010   11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:58.010    11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:58.010    11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:58.010    11:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:58.010    11:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.010    11:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:58.010   11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:58.011    "name": "Existed_Raid",
00:09:58.011    "uuid": "00000000-0000-0000-0000-000000000000",
00:09:58.011    "strip_size_kb": 64,
00:09:58.011    "state": "configuring",
00:09:58.011    "raid_level": "concat",
00:09:58.011    "superblock": false,
00:09:58.011    "num_base_bdevs": 3,
00:09:58.011    "num_base_bdevs_discovered": 2,
00:09:58.011    "num_base_bdevs_operational": 3,
00:09:58.011    "base_bdevs_list": [
00:09:58.011      {
00:09:58.011        "name": "BaseBdev1",
00:09:58.011        "uuid": "ed7d0b4a-3511-4f37-97c9-f7f078c747c4",
00:09:58.011        "is_configured": true,
00:09:58.011        "data_offset": 0,
00:09:58.011        "data_size": 65536
00:09:58.011      },
00:09:58.011      {
00:09:58.011        "name": "BaseBdev2",
00:09:58.011        "uuid": "fc0a856d-16a0-4886-a546-5387f754cac8",
00:09:58.011        "is_configured": true,
00:09:58.011        "data_offset": 0,
00:09:58.011        "data_size": 65536
00:09:58.011      },
00:09:58.011      {
00:09:58.011        "name": "BaseBdev3",
00:09:58.011        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:58.011        "is_configured": false,
00:09:58.011        "data_offset": 0,
00:09:58.011        "data_size": 0
00:09:58.011      }
00:09:58.011    ]
00:09:58.011  }'
00:09:58.011   11:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:58.011   11:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.579  [2024-12-16 11:31:24.377788] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:58.579  [2024-12-16 11:31:24.377839] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:09:58.579  [2024-12-16 11:31:24.377851] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:09:58.579  [2024-12-16 11:31:24.378193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:09:58.579  [2024-12-16 11:31:24.378343] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:09:58.579  [2024-12-16 11:31:24.378355] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:09:58.579  [2024-12-16 11:31:24.378611] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:58.579  BaseBdev3
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.579  [
00:09:58.579  {
00:09:58.579  "name": "BaseBdev3",
00:09:58.579  "aliases": [
00:09:58.579  "afa6f20d-158d-42a0-84fa-1b85b95d2cf1"
00:09:58.579  ],
00:09:58.579  "product_name": "Malloc disk",
00:09:58.579  "block_size": 512,
00:09:58.579  "num_blocks": 65536,
00:09:58.579  "uuid": "afa6f20d-158d-42a0-84fa-1b85b95d2cf1",
00:09:58.579  "assigned_rate_limits": {
00:09:58.579  "rw_ios_per_sec": 0,
00:09:58.579  "rw_mbytes_per_sec": 0,
00:09:58.579  "r_mbytes_per_sec": 0,
00:09:58.579  "w_mbytes_per_sec": 0
00:09:58.579  },
00:09:58.579  "claimed": true,
00:09:58.579  "claim_type": "exclusive_write",
00:09:58.579  "zoned": false,
00:09:58.579  "supported_io_types": {
00:09:58.579  "read": true,
00:09:58.579  "write": true,
00:09:58.579  "unmap": true,
00:09:58.579  "flush": true,
00:09:58.579  "reset": true,
00:09:58.579  "nvme_admin": false,
00:09:58.579  "nvme_io": false,
00:09:58.579  "nvme_io_md": false,
00:09:58.579  "write_zeroes": true,
00:09:58.579  "zcopy": true,
00:09:58.579  "get_zone_info": false,
00:09:58.579  "zone_management": false,
00:09:58.579  "zone_append": false,
00:09:58.579  "compare": false,
00:09:58.579  "compare_and_write": false,
00:09:58.579  "abort": true,
00:09:58.579  "seek_hole": false,
00:09:58.579  "seek_data": false,
00:09:58.579  "copy": true,
00:09:58.579  "nvme_iov_md": false
00:09:58.579  },
00:09:58.579  "memory_domains": [
00:09:58.579  {
00:09:58.579  "dma_device_id": "system",
00:09:58.579  "dma_device_type": 1
00:09:58.579  },
00:09:58.579  {
00:09:58.579  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:58.579  "dma_device_type": 2
00:09:58.579  }
00:09:58.579  ],
00:09:58.579  "driver_specific": {}
00:09:58.579  }
00:09:58.579  ]
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:58.579    11:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:58.579    11:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:58.579    11:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:58.579    11:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.579    11:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:58.579    "name": "Existed_Raid",
00:09:58.579    "uuid": "54744eb8-2324-4564-a87d-3e3ccf88e1c9",
00:09:58.579    "strip_size_kb": 64,
00:09:58.579    "state": "online",
00:09:58.579    "raid_level": "concat",
00:09:58.579    "superblock": false,
00:09:58.579    "num_base_bdevs": 3,
00:09:58.579    "num_base_bdevs_discovered": 3,
00:09:58.579    "num_base_bdevs_operational": 3,
00:09:58.579    "base_bdevs_list": [
00:09:58.579      {
00:09:58.579        "name": "BaseBdev1",
00:09:58.579        "uuid": "ed7d0b4a-3511-4f37-97c9-f7f078c747c4",
00:09:58.579        "is_configured": true,
00:09:58.579        "data_offset": 0,
00:09:58.579        "data_size": 65536
00:09:58.579      },
00:09:58.579      {
00:09:58.579        "name": "BaseBdev2",
00:09:58.579        "uuid": "fc0a856d-16a0-4886-a546-5387f754cac8",
00:09:58.579        "is_configured": true,
00:09:58.579        "data_offset": 0,
00:09:58.579        "data_size": 65536
00:09:58.579      },
00:09:58.579      {
00:09:58.579        "name": "BaseBdev3",
00:09:58.579        "uuid": "afa6f20d-158d-42a0-84fa-1b85b95d2cf1",
00:09:58.579        "is_configured": true,
00:09:58.579        "data_offset": 0,
00:09:58.579        "data_size": 65536
00:09:58.579      }
00:09:58.579    ]
00:09:58.579  }'
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:58.579   11:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.147   11:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:09:59.147   11:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:09:59.147   11:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:59.147   11:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:59.147   11:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:59.147   11:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:59.147    11:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:09:59.147    11:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:59.147    11:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:59.147    11:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.147  [2024-12-16 11:31:24.917369] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:59.147    11:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:59.147   11:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:59.147    "name": "Existed_Raid",
00:09:59.147    "aliases": [
00:09:59.147      "54744eb8-2324-4564-a87d-3e3ccf88e1c9"
00:09:59.147    ],
00:09:59.147    "product_name": "Raid Volume",
00:09:59.147    "block_size": 512,
00:09:59.147    "num_blocks": 196608,
00:09:59.147    "uuid": "54744eb8-2324-4564-a87d-3e3ccf88e1c9",
00:09:59.147    "assigned_rate_limits": {
00:09:59.147      "rw_ios_per_sec": 0,
00:09:59.147      "rw_mbytes_per_sec": 0,
00:09:59.147      "r_mbytes_per_sec": 0,
00:09:59.147      "w_mbytes_per_sec": 0
00:09:59.147    },
00:09:59.147    "claimed": false,
00:09:59.147    "zoned": false,
00:09:59.147    "supported_io_types": {
00:09:59.147      "read": true,
00:09:59.147      "write": true,
00:09:59.147      "unmap": true,
00:09:59.147      "flush": true,
00:09:59.147      "reset": true,
00:09:59.147      "nvme_admin": false,
00:09:59.147      "nvme_io": false,
00:09:59.147      "nvme_io_md": false,
00:09:59.147      "write_zeroes": true,
00:09:59.147      "zcopy": false,
00:09:59.147      "get_zone_info": false,
00:09:59.147      "zone_management": false,
00:09:59.147      "zone_append": false,
00:09:59.147      "compare": false,
00:09:59.147      "compare_and_write": false,
00:09:59.147      "abort": false,
00:09:59.147      "seek_hole": false,
00:09:59.147      "seek_data": false,
00:09:59.147      "copy": false,
00:09:59.147      "nvme_iov_md": false
00:09:59.147    },
00:09:59.147    "memory_domains": [
00:09:59.147      {
00:09:59.147        "dma_device_id": "system",
00:09:59.147        "dma_device_type": 1
00:09:59.147      },
00:09:59.147      {
00:09:59.147        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:59.147        "dma_device_type": 2
00:09:59.147      },
00:09:59.147      {
00:09:59.147        "dma_device_id": "system",
00:09:59.147        "dma_device_type": 1
00:09:59.147      },
00:09:59.147      {
00:09:59.147        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:59.147        "dma_device_type": 2
00:09:59.147      },
00:09:59.147      {
00:09:59.147        "dma_device_id": "system",
00:09:59.147        "dma_device_type": 1
00:09:59.147      },
00:09:59.147      {
00:09:59.147        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:59.147        "dma_device_type": 2
00:09:59.147      }
00:09:59.147    ],
00:09:59.147    "driver_specific": {
00:09:59.147      "raid": {
00:09:59.147        "uuid": "54744eb8-2324-4564-a87d-3e3ccf88e1c9",
00:09:59.147        "strip_size_kb": 64,
00:09:59.147        "state": "online",
00:09:59.147        "raid_level": "concat",
00:09:59.147        "superblock": false,
00:09:59.147        "num_base_bdevs": 3,
00:09:59.147        "num_base_bdevs_discovered": 3,
00:09:59.147        "num_base_bdevs_operational": 3,
00:09:59.147        "base_bdevs_list": [
00:09:59.147          {
00:09:59.147            "name": "BaseBdev1",
00:09:59.147            "uuid": "ed7d0b4a-3511-4f37-97c9-f7f078c747c4",
00:09:59.147            "is_configured": true,
00:09:59.147            "data_offset": 0,
00:09:59.147            "data_size": 65536
00:09:59.147          },
00:09:59.147          {
00:09:59.147            "name": "BaseBdev2",
00:09:59.147            "uuid": "fc0a856d-16a0-4886-a546-5387f754cac8",
00:09:59.147            "is_configured": true,
00:09:59.147            "data_offset": 0,
00:09:59.147            "data_size": 65536
00:09:59.147          },
00:09:59.147          {
00:09:59.147            "name": "BaseBdev3",
00:09:59.147            "uuid": "afa6f20d-158d-42a0-84fa-1b85b95d2cf1",
00:09:59.147            "is_configured": true,
00:09:59.147            "data_offset": 0,
00:09:59.147            "data_size": 65536
00:09:59.147          }
00:09:59.147        ]
00:09:59.147      }
00:09:59.147    }
00:09:59.147  }'
00:09:59.147    11:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:59.147   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:09:59.148  BaseBdev2
00:09:59.148  BaseBdev3'
00:09:59.148    11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:59.148   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:09:59.148   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:59.148    11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:09:59.148    11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:59.148    11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:59.148    11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.148    11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:59.148   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:09:59.148   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:09:59.148   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:59.148    11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:59.148    11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:09:59.148    11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:59.148    11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.148    11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:59.148   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:09:59.148   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:09:59.148   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:59.148    11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:09:59.148    11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:59.148    11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:59.148    11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.148    11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:59.406   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:09:59.406   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:09:59.406   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:59.406   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:59.406   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.406  [2024-12-16 11:31:25.220683] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:59.406  [2024-12-16 11:31:25.220718] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:59.406  [2024-12-16 11:31:25.220791] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:59.406   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:59.406   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:09:59.406   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:09:59.406   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:59.406   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:09:59.406   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:09:59.406   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2
00:09:59.406   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:59.406   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:09:59.406   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:59.406   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:59.406   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:59.406   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:59.406   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:59.406   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:59.406   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:59.406    11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:59.406    11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:59.406    11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:59.406    11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.406    11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:59.406   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:59.406    "name": "Existed_Raid",
00:09:59.406    "uuid": "54744eb8-2324-4564-a87d-3e3ccf88e1c9",
00:09:59.406    "strip_size_kb": 64,
00:09:59.406    "state": "offline",
00:09:59.406    "raid_level": "concat",
00:09:59.406    "superblock": false,
00:09:59.406    "num_base_bdevs": 3,
00:09:59.406    "num_base_bdevs_discovered": 2,
00:09:59.406    "num_base_bdevs_operational": 2,
00:09:59.406    "base_bdevs_list": [
00:09:59.406      {
00:09:59.406        "name": null,
00:09:59.406        "uuid": "00000000-0000-0000-0000-000000000000",
00:09:59.406        "is_configured": false,
00:09:59.406        "data_offset": 0,
00:09:59.406        "data_size": 65536
00:09:59.406      },
00:09:59.406      {
00:09:59.406        "name": "BaseBdev2",
00:09:59.406        "uuid": "fc0a856d-16a0-4886-a546-5387f754cac8",
00:09:59.406        "is_configured": true,
00:09:59.406        "data_offset": 0,
00:09:59.406        "data_size": 65536
00:09:59.406      },
00:09:59.406      {
00:09:59.406        "name": "BaseBdev3",
00:09:59.406        "uuid": "afa6f20d-158d-42a0-84fa-1b85b95d2cf1",
00:09:59.406        "is_configured": true,
00:09:59.406        "data_offset": 0,
00:09:59.406        "data_size": 65536
00:09:59.406      }
00:09:59.406    ]
00:09:59.406  }'
00:09:59.406   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:59.406   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.664   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:09:59.664   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:59.664    11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:59.664    11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:59.664    11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:59.664    11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.664    11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:59.923   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:59.923   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:59.923   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:09:59.923   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:59.923   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.923  [2024-12-16 11:31:25.747828] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:59.923   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:59.923   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:59.923   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:59.923    11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:59.923    11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:59.923    11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:59.923    11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.923    11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:59.923   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:59.923   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:59.923   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:09:59.923   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:59.923   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.923  [2024-12-16 11:31:25.819527] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:09:59.923  [2024-12-16 11:31:25.819649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline
00:09:59.923   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:59.923   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:59.923   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:59.923    11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:59.923    11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:09:59.923    11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:59.923    11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.923    11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.924  BaseBdev2
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.924  [
00:09:59.924  {
00:09:59.924  "name": "BaseBdev2",
00:09:59.924  "aliases": [
00:09:59.924  "a260e037-708f-4eb6-8b43-fb8070d1f339"
00:09:59.924  ],
00:09:59.924  "product_name": "Malloc disk",
00:09:59.924  "block_size": 512,
00:09:59.924  "num_blocks": 65536,
00:09:59.924  "uuid": "a260e037-708f-4eb6-8b43-fb8070d1f339",
00:09:59.924  "assigned_rate_limits": {
00:09:59.924  "rw_ios_per_sec": 0,
00:09:59.924  "rw_mbytes_per_sec": 0,
00:09:59.924  "r_mbytes_per_sec": 0,
00:09:59.924  "w_mbytes_per_sec": 0
00:09:59.924  },
00:09:59.924  "claimed": false,
00:09:59.924  "zoned": false,
00:09:59.924  "supported_io_types": {
00:09:59.924  "read": true,
00:09:59.924  "write": true,
00:09:59.924  "unmap": true,
00:09:59.924  "flush": true,
00:09:59.924  "reset": true,
00:09:59.924  "nvme_admin": false,
00:09:59.924  "nvme_io": false,
00:09:59.924  "nvme_io_md": false,
00:09:59.924  "write_zeroes": true,
00:09:59.924  "zcopy": true,
00:09:59.924  "get_zone_info": false,
00:09:59.924  "zone_management": false,
00:09:59.924  "zone_append": false,
00:09:59.924  "compare": false,
00:09:59.924  "compare_and_write": false,
00:09:59.924  "abort": true,
00:09:59.924  "seek_hole": false,
00:09:59.924  "seek_data": false,
00:09:59.924  "copy": true,
00:09:59.924  "nvme_iov_md": false
00:09:59.924  },
00:09:59.924  "memory_domains": [
00:09:59.924  {
00:09:59.924  "dma_device_id": "system",
00:09:59.924  "dma_device_type": 1
00:09:59.924  },
00:09:59.924  {
00:09:59.924  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:59.924  "dma_device_type": 2
00:09:59.924  }
00:09:59.924  ],
00:09:59.924  "driver_specific": {}
00:09:59.924  }
00:09:59.924  ]
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.924  BaseBdev3
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:59.924   11:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.184  [
00:10:00.184  {
00:10:00.184  "name": "BaseBdev3",
00:10:00.184  "aliases": [
00:10:00.184  "148c539e-916c-43ed-8c22-a472e8cc3b59"
00:10:00.184  ],
00:10:00.184  "product_name": "Malloc disk",
00:10:00.184  "block_size": 512,
00:10:00.184  "num_blocks": 65536,
00:10:00.184  "uuid": "148c539e-916c-43ed-8c22-a472e8cc3b59",
00:10:00.184  "assigned_rate_limits": {
00:10:00.184  "rw_ios_per_sec": 0,
00:10:00.184  "rw_mbytes_per_sec": 0,
00:10:00.184  "r_mbytes_per_sec": 0,
00:10:00.184  "w_mbytes_per_sec": 0
00:10:00.184  },
00:10:00.184  "claimed": false,
00:10:00.184  "zoned": false,
00:10:00.184  "supported_io_types": {
00:10:00.184  "read": true,
00:10:00.184  "write": true,
00:10:00.184  "unmap": true,
00:10:00.184  "flush": true,
00:10:00.184  "reset": true,
00:10:00.184  "nvme_admin": false,
00:10:00.184  "nvme_io": false,
00:10:00.184  "nvme_io_md": false,
00:10:00.184  "write_zeroes": true,
00:10:00.184  "zcopy": true,
00:10:00.184  "get_zone_info": false,
00:10:00.184  "zone_management": false,
00:10:00.184  "zone_append": false,
00:10:00.184  "compare": false,
00:10:00.184  "compare_and_write": false,
00:10:00.184  "abort": true,
00:10:00.184  "seek_hole": false,
00:10:00.184  "seek_data": false,
00:10:00.184  "copy": true,
00:10:00.184  "nvme_iov_md": false
00:10:00.184  },
00:10:00.184  "memory_domains": [
00:10:00.184  {
00:10:00.184  "dma_device_id": "system",
00:10:00.184  "dma_device_type": 1
00:10:00.184  },
00:10:00.184  {
00:10:00.184  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:00.184  "dma_device_type": 2
00:10:00.184  }
00:10:00.184  ],
00:10:00.184  "driver_specific": {}
00:10:00.184  }
00:10:00.184  ]
00:10:00.184   11:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:00.184   11:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:10:00.184   11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:00.184   11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
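Note: the loop at bdev_raid.sh@286-288 traced above pre-creates the remaining base bdevs as 32 MiB malloc disks with 512-byte blocks (hence "num_blocks": 65536 in the dumps) and waits for each one to finish examine. A minimal standalone sketch of the same calls, assuming a running SPDK target with scripts/rpc.py on PATH (rpc_cmd in the trace is the autotest wrapper around it):

    # hypothetical manual equivalent of the loop body traced above
    for name in BaseBdev2 BaseBdev3; do
        scripts/rpc.py bdev_malloc_create 32 512 -b "$name"   # 32 MiB, 512-byte blocks
        scripts/rpc.py bdev_wait_for_examine                  # let examine callbacks settle
        scripts/rpc.py bdev_get_bdevs -b "$name" -t 2000      # wait up to 2 s for the bdev to appear
    done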
00:10:00.184   11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:00.184   11:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:00.184   11:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.184  [2024-12-16 11:31:26.013980] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:00.184  [2024-12-16 11:31:26.014037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:00.184  [2024-12-16 11:31:26.014062] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:00.184  [2024-12-16 11:31:26.016209] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:00.184   11:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
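Note: the NOTICE/DEBUG lines above show that bdev_raid_create succeeds even though BaseBdev1 does not exist yet; the missing member simply leaves the array in the "configuring" state until it appears. Standalone sketch of the same call (hypothetical, same assumptions as above):

    scripts/rpc.py bdev_raid_create -z 64 -r concat -n Existed_Raid \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3'   # BaseBdev1 absent -> state stays "configuring"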
00:10:00.184   11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:00.184   11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:00.184   11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:00.184   11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:00.184   11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:00.184   11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:00.184   11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:00.184   11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:00.184   11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:00.184   11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:00.184    11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:00.184    11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:00.184    11:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:00.184    11:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.184    11:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:00.184   11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:00.184    "name": "Existed_Raid",
00:10:00.184    "uuid": "00000000-0000-0000-0000-000000000000",
00:10:00.184    "strip_size_kb": 64,
00:10:00.184    "state": "configuring",
00:10:00.184    "raid_level": "concat",
00:10:00.184    "superblock": false,
00:10:00.184    "num_base_bdevs": 3,
00:10:00.184    "num_base_bdevs_discovered": 2,
00:10:00.184    "num_base_bdevs_operational": 3,
00:10:00.184    "base_bdevs_list": [
00:10:00.184      {
00:10:00.184        "name": "BaseBdev1",
00:10:00.184        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:00.184        "is_configured": false,
00:10:00.184        "data_offset": 0,
00:10:00.184        "data_size": 0
00:10:00.184      },
00:10:00.184      {
00:10:00.184        "name": "BaseBdev2",
00:10:00.184        "uuid": "a260e037-708f-4eb6-8b43-fb8070d1f339",
00:10:00.185        "is_configured": true,
00:10:00.185        "data_offset": 0,
00:10:00.185        "data_size": 65536
00:10:00.185      },
00:10:00.185      {
00:10:00.185        "name": "BaseBdev3",
00:10:00.185        "uuid": "148c539e-916c-43ed-8c22-a472e8cc3b59",
00:10:00.185        "is_configured": true,
00:10:00.185        "data_offset": 0,
00:10:00.185        "data_size": 65536
00:10:00.185      }
00:10:00.185    ]
00:10:00.185  }'
00:10:00.185   11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:00.185   11:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
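Note: verify_raid_bdev_state (bdev_raid.sh@103-115) fetches bdev_raid_get_bdevs all, jq-selects the named array, and asserts the expected state, RAID level, strip size and base-bdev counts; here it expects configuring/concat/64 with 2 of 3 members discovered. The query side can be reproduced with (sketch, assuming jq is available):

    scripts/rpc.py bdev_raid_get_bdevs all | jq -r \
        '.[] | select(.name == "Existed_Raid")
             | "\(.state) \(.raid_level) \(.strip_size_kb) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
    # expected at this point: configuring concat 64 2/3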
00:10:00.444   11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:10:00.444   11:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:00.444   11:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.444  [2024-12-16 11:31:26.485320] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:10:00.444   11:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:00.444   11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:00.444   11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:00.444   11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:00.444   11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:00.444   11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:00.444   11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:00.444   11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:00.444   11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:00.444   11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:00.444   11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:00.444    11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:00.444    11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:00.444    11:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:00.444    11:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.703    11:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:00.703   11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:00.703    "name": "Existed_Raid",
00:10:00.703    "uuid": "00000000-0000-0000-0000-000000000000",
00:10:00.703    "strip_size_kb": 64,
00:10:00.703    "state": "configuring",
00:10:00.703    "raid_level": "concat",
00:10:00.703    "superblock": false,
00:10:00.703    "num_base_bdevs": 3,
00:10:00.703    "num_base_bdevs_discovered": 1,
00:10:00.703    "num_base_bdevs_operational": 3,
00:10:00.703    "base_bdevs_list": [
00:10:00.703      {
00:10:00.703        "name": "BaseBdev1",
00:10:00.703        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:00.703        "is_configured": false,
00:10:00.703        "data_offset": 0,
00:10:00.703        "data_size": 0
00:10:00.703      },
00:10:00.703      {
00:10:00.703        "name": null,
00:10:00.703        "uuid": "a260e037-708f-4eb6-8b43-fb8070d1f339",
00:10:00.703        "is_configured": false,
00:10:00.703        "data_offset": 0,
00:10:00.703        "data_size": 65536
00:10:00.703      },
00:10:00.703      {
00:10:00.703        "name": "BaseBdev3",
00:10:00.703        "uuid": "148c539e-916c-43ed-8c22-a472e8cc3b59",
00:10:00.703        "is_configured": true,
00:10:00.703        "data_offset": 0,
00:10:00.703        "data_size": 65536
00:10:00.703      }
00:10:00.703    ]
00:10:00.703  }'
00:10:00.703   11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:00.703   11:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.963    11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:00.963    11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:00.963    11:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:00.963    11:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.963    11:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:00.963   11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
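Note: removing a member from a still-configuring array does not shrink base_bdevs_list; the slot keeps its UUID but its name becomes null and is_configured drops to false, so num_base_bdevs_discovered fell from 2 to 1 above. Sketch of the remove-and-check sequence at @293-295 (same assumptions as earlier):

    scripts/rpc.py bdev_raid_remove_base_bdev BaseBdev2
    scripts/rpc.py bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[1].is_configured'   # -> false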
00:10:00.963   11:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:00.963   11:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:00.963   11:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.963  [2024-12-16 11:31:26.999853] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:00.963  BaseBdev1
00:10:00.963   11:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:00.963   11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:10:00.963   11:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:10:00.963   11:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:00.963   11:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:10:00.963   11:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:00.963   11:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:00.963   11:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:00.963   11:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:00.963   11:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.963   11:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:00.963   11:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:00.963   11:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:00.963   11:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:01.223  [
00:10:01.223  {
00:10:01.223  "name": "BaseBdev1",
00:10:01.223  "aliases": [
00:10:01.223  "473e360f-ede5-48e7-be3c-e6dfad766dfc"
00:10:01.223  ],
00:10:01.223  "product_name": "Malloc disk",
00:10:01.223  "block_size": 512,
00:10:01.223  "num_blocks": 65536,
00:10:01.223  "uuid": "473e360f-ede5-48e7-be3c-e6dfad766dfc",
00:10:01.223  "assigned_rate_limits": {
00:10:01.223  "rw_ios_per_sec": 0,
00:10:01.223  "rw_mbytes_per_sec": 0,
00:10:01.223  "r_mbytes_per_sec": 0,
00:10:01.223  "w_mbytes_per_sec": 0
00:10:01.223  },
00:10:01.223  "claimed": true,
00:10:01.223  "claim_type": "exclusive_write",
00:10:01.223  "zoned": false,
00:10:01.223  "supported_io_types": {
00:10:01.223  "read": true,
00:10:01.223  "write": true,
00:10:01.223  "unmap": true,
00:10:01.223  "flush": true,
00:10:01.223  "reset": true,
00:10:01.223  "nvme_admin": false,
00:10:01.223  "nvme_io": false,
00:10:01.223  "nvme_io_md": false,
00:10:01.223  "write_zeroes": true,
00:10:01.223  "zcopy": true,
00:10:01.223  "get_zone_info": false,
00:10:01.223  "zone_management": false,
00:10:01.223  "zone_append": false,
00:10:01.223  "compare": false,
00:10:01.223  "compare_and_write": false,
00:10:01.223  "abort": true,
00:10:01.223  "seek_hole": false,
00:10:01.223  "seek_data": false,
00:10:01.223  "copy": true,
00:10:01.223  "nvme_iov_md": false
00:10:01.223  },
00:10:01.223  "memory_domains": [
00:10:01.223  {
00:10:01.223  "dma_device_id": "system",
00:10:01.223  "dma_device_type": 1
00:10:01.223  },
00:10:01.223  {
00:10:01.223  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:01.223  "dma_device_type": 2
00:10:01.223  }
00:10:01.223  ],
00:10:01.223  "driver_specific": {}
00:10:01.223  }
00:10:01.223  ]
00:10:01.223   11:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:01.223   11:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:10:01.223   11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:01.223   11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:01.223   11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:01.223   11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:01.223   11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:01.223   11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:01.223   11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:01.223   11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:01.223   11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:01.223   11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:01.223    11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:01.223    11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:01.223    11:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:01.223    11:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:01.223    11:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:01.223   11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:01.223    "name": "Existed_Raid",
00:10:01.223    "uuid": "00000000-0000-0000-0000-000000000000",
00:10:01.223    "strip_size_kb": 64,
00:10:01.223    "state": "configuring",
00:10:01.223    "raid_level": "concat",
00:10:01.223    "superblock": false,
00:10:01.223    "num_base_bdevs": 3,
00:10:01.223    "num_base_bdevs_discovered": 2,
00:10:01.223    "num_base_bdevs_operational": 3,
00:10:01.223    "base_bdevs_list": [
00:10:01.224      {
00:10:01.224        "name": "BaseBdev1",
00:10:01.224        "uuid": "473e360f-ede5-48e7-be3c-e6dfad766dfc",
00:10:01.224        "is_configured": true,
00:10:01.224        "data_offset": 0,
00:10:01.224        "data_size": 65536
00:10:01.224      },
00:10:01.224      {
00:10:01.224        "name": null,
00:10:01.224        "uuid": "a260e037-708f-4eb6-8b43-fb8070d1f339",
00:10:01.224        "is_configured": false,
00:10:01.224        "data_offset": 0,
00:10:01.224        "data_size": 65536
00:10:01.224      },
00:10:01.224      {
00:10:01.224        "name": "BaseBdev3",
00:10:01.224        "uuid": "148c539e-916c-43ed-8c22-a472e8cc3b59",
00:10:01.224        "is_configured": true,
00:10:01.224        "data_offset": 0,
00:10:01.224        "data_size": 65536
00:10:01.224      }
00:10:01.224    ]
00:10:01.224  }'
00:10:01.224   11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:01.224   11:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:01.482    11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:01.482    11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:01.482    11:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:01.482    11:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:01.482    11:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:01.482   11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
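Note: BaseBdev1 is created only now, and the DEBUG line above ("bdev BaseBdev1 is claimed") shows the waiting array picking it up immediately: slot 0 becomes configured while the array stays "configuring" because slot 1 (the removed BaseBdev2) is still empty. Sketch:

    scripts/rpc.py bdev_malloc_create 32 512 -b BaseBdev1
    scripts/rpc.py bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[0].is_configured'   # -> true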
00:10:01.482   11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:10:01.482   11:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:01.482   11:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:01.742  [2024-12-16 11:31:27.551016] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:01.742   11:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:01.742   11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:01.742   11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:01.742   11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:01.742   11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:01.742   11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:01.742   11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:01.742   11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:01.742   11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:01.742   11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:01.742   11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:01.742    11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:01.742    11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:01.742    11:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:01.742    11:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:01.742    11:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:01.742   11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:01.742    "name": "Existed_Raid",
00:10:01.742    "uuid": "00000000-0000-0000-0000-000000000000",
00:10:01.742    "strip_size_kb": 64,
00:10:01.742    "state": "configuring",
00:10:01.742    "raid_level": "concat",
00:10:01.742    "superblock": false,
00:10:01.742    "num_base_bdevs": 3,
00:10:01.742    "num_base_bdevs_discovered": 1,
00:10:01.742    "num_base_bdevs_operational": 3,
00:10:01.742    "base_bdevs_list": [
00:10:01.742      {
00:10:01.742        "name": "BaseBdev1",
00:10:01.742        "uuid": "473e360f-ede5-48e7-be3c-e6dfad766dfc",
00:10:01.742        "is_configured": true,
00:10:01.742        "data_offset": 0,
00:10:01.742        "data_size": 65536
00:10:01.742      },
00:10:01.742      {
00:10:01.742        "name": null,
00:10:01.742        "uuid": "a260e037-708f-4eb6-8b43-fb8070d1f339",
00:10:01.742        "is_configured": false,
00:10:01.742        "data_offset": 0,
00:10:01.742        "data_size": 65536
00:10:01.742      },
00:10:01.742      {
00:10:01.742        "name": null,
00:10:01.742        "uuid": "148c539e-916c-43ed-8c22-a472e8cc3b59",
00:10:01.742        "is_configured": false,
00:10:01.742        "data_offset": 0,
00:10:01.742        "data_size": 65536
00:10:01.742      }
00:10:01.742    ]
00:10:01.742  }'
00:10:01.742   11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:01.742   11:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:02.001    11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:02.001    11:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:02.001    11:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:02.001    11:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:02.001    11:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:02.001   11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:10:02.001   11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:10:02.001   11:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:02.001   11:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:02.001  [2024-12-16 11:31:28.050219] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:02.001   11:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:02.001   11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:02.001   11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:02.001   11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:02.001   11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:02.001   11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:02.001   11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:02.001   11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:02.001   11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:02.002   11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:02.002   11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:02.002    11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:02.002    11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:02.002    11:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:02.002    11:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:02.260    11:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:02.260   11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:02.260    "name": "Existed_Raid",
00:10:02.260    "uuid": "00000000-0000-0000-0000-000000000000",
00:10:02.260    "strip_size_kb": 64,
00:10:02.260    "state": "configuring",
00:10:02.260    "raid_level": "concat",
00:10:02.260    "superblock": false,
00:10:02.260    "num_base_bdevs": 3,
00:10:02.260    "num_base_bdevs_discovered": 2,
00:10:02.260    "num_base_bdevs_operational": 3,
00:10:02.260    "base_bdevs_list": [
00:10:02.260      {
00:10:02.260        "name": "BaseBdev1",
00:10:02.260        "uuid": "473e360f-ede5-48e7-be3c-e6dfad766dfc",
00:10:02.260        "is_configured": true,
00:10:02.260        "data_offset": 0,
00:10:02.260        "data_size": 65536
00:10:02.260      },
00:10:02.260      {
00:10:02.260        "name": null,
00:10:02.260        "uuid": "a260e037-708f-4eb6-8b43-fb8070d1f339",
00:10:02.260        "is_configured": false,
00:10:02.260        "data_offset": 0,
00:10:02.260        "data_size": 65536
00:10:02.260      },
00:10:02.260      {
00:10:02.260        "name": "BaseBdev3",
00:10:02.260        "uuid": "148c539e-916c-43ed-8c22-a472e8cc3b59",
00:10:02.260        "is_configured": true,
00:10:02.260        "data_offset": 0,
00:10:02.260        "data_size": 65536
00:10:02.260      }
00:10:02.260    ]
00:10:02.260  }'
00:10:02.260   11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:02.260   11:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:02.519    11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:02.519    11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:02.519    11:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:02.519    11:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:02.519    11:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:02.519   11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
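Note: bdev_raid_add_base_bdev is the inverse of the remove used earlier; re-attaching BaseBdev3 (removed at @302) makes slot 2 configured again, which the check at @308 just confirmed. Sketch:

    scripts/rpc.py bdev_raid_add_base_bdev Existed_Raid BaseBdev3
    scripts/rpc.py bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'   # -> true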
00:10:02.519   11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:10:02.519   11:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:02.519   11:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:02.519  [2024-12-16 11:31:28.569672] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:10:02.519   11:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:02.519   11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:02.519   11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:02.519   11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:02.519   11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:02.519   11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:02.519   11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:02.519   11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:02.519   11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:02.519   11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:02.778   11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:02.778    11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:02.778    11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:02.778    11:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:02.778    11:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:02.778    11:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:02.778   11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:02.778    "name": "Existed_Raid",
00:10:02.778    "uuid": "00000000-0000-0000-0000-000000000000",
00:10:02.778    "strip_size_kb": 64,
00:10:02.778    "state": "configuring",
00:10:02.778    "raid_level": "concat",
00:10:02.778    "superblock": false,
00:10:02.778    "num_base_bdevs": 3,
00:10:02.778    "num_base_bdevs_discovered": 1,
00:10:02.778    "num_base_bdevs_operational": 3,
00:10:02.778    "base_bdevs_list": [
00:10:02.778      {
00:10:02.778        "name": null,
00:10:02.778        "uuid": "473e360f-ede5-48e7-be3c-e6dfad766dfc",
00:10:02.778        "is_configured": false,
00:10:02.778        "data_offset": 0,
00:10:02.778        "data_size": 65536
00:10:02.778      },
00:10:02.778      {
00:10:02.778        "name": null,
00:10:02.778        "uuid": "a260e037-708f-4eb6-8b43-fb8070d1f339",
00:10:02.778        "is_configured": false,
00:10:02.778        "data_offset": 0,
00:10:02.778        "data_size": 65536
00:10:02.778      },
00:10:02.778      {
00:10:02.778        "name": "BaseBdev3",
00:10:02.778        "uuid": "148c539e-916c-43ed-8c22-a472e8cc3b59",
00:10:02.778        "is_configured": true,
00:10:02.778        "data_offset": 0,
00:10:02.778        "data_size": 65536
00:10:02.778      }
00:10:02.778    ]
00:10:02.778  }'
00:10:02.778   11:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:02.778   11:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:03.037    11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:03.037    11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:03.037    11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:03.037    11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:03.037    11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:03.037   11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
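Note: unlike bdev_raid_remove_base_bdev, bdev_malloc_delete destroys the backing bdev itself; the raid module reacts to the hot remove (DEBUG line at 11:31:28) and slot 0 reverts to an unnamed, unconfigured entry, which the check at @312 just verified. Sketch:

    scripts/rpc.py bdev_malloc_delete BaseBdev1
    scripts/rpc.py bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[0].is_configured'   # -> false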
00:10:03.037   11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:10:03.037   11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:03.037   11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:03.037  [2024-12-16 11:31:29.079737] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:03.037   11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:03.037   11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:03.037   11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:03.037   11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:03.037   11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:03.037   11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:03.037   11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:03.037   11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:03.037   11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:03.037   11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:03.037   11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:03.037    11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:03.037    11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:03.037    11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:03.037    11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:03.296    11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:03.296   11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:03.296    "name": "Existed_Raid",
00:10:03.296    "uuid": "00000000-0000-0000-0000-000000000000",
00:10:03.296    "strip_size_kb": 64,
00:10:03.296    "state": "configuring",
00:10:03.296    "raid_level": "concat",
00:10:03.296    "superblock": false,
00:10:03.296    "num_base_bdevs": 3,
00:10:03.296    "num_base_bdevs_discovered": 2,
00:10:03.296    "num_base_bdevs_operational": 3,
00:10:03.296    "base_bdevs_list": [
00:10:03.296      {
00:10:03.296        "name": null,
00:10:03.296        "uuid": "473e360f-ede5-48e7-be3c-e6dfad766dfc",
00:10:03.296        "is_configured": false,
00:10:03.296        "data_offset": 0,
00:10:03.296        "data_size": 65536
00:10:03.296      },
00:10:03.296      {
00:10:03.296        "name": "BaseBdev2",
00:10:03.297        "uuid": "a260e037-708f-4eb6-8b43-fb8070d1f339",
00:10:03.297        "is_configured": true,
00:10:03.297        "data_offset": 0,
00:10:03.297        "data_size": 65536
00:10:03.297      },
00:10:03.297      {
00:10:03.297        "name": "BaseBdev3",
00:10:03.297        "uuid": "148c539e-916c-43ed-8c22-a472e8cc3b59",
00:10:03.297        "is_configured": true,
00:10:03.297        "data_offset": 0,
00:10:03.297        "data_size": 65536
00:10:03.297      }
00:10:03.297    ]
00:10:03.297  }'
00:10:03.297   11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:03.297   11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:03.556    11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:03.556    11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:03.556    11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:03.556    11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:03.556    11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:03.556   11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:10:03.556    11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:10:03.556    11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:03.556    11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:03.556    11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:03.556    11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:03.556   11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 473e360f-ede5-48e7-be3c-e6dfad766dfc
00:10:03.556   11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:03.556   11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:03.816  [2024-12-16 11:31:29.630280] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:10:03.816  [2024-12-16 11:31:29.630413] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:10:03.816  [2024-12-16 11:31:29.630447] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:10:03.816  [2024-12-16 11:31:29.630811] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:10:03.816  [2024-12-16 11:31:29.630996] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:10:03.816  [2024-12-16 11:31:29.631046] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00
00:10:03.816  [2024-12-16 11:31:29.631313] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:03.816  NewBaseBdev
00:10:03.816   11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
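Note: @318 reads the orphaned slot's UUID and recreates the member as NewBaseBdev with that same UUID; the raid module claims it into slot 0 (presumably matched via that UUID, as the claim and the "raid bdev is created with name Existed_Raid" messages above show), and with all three slots configured the array moves from configuring to online. Sketch (same assumptions as earlier):

    uuid=$(scripts/rpc.py bdev_raid_get_bdevs all | jq -r '.[0].base_bdevs_list[0].uuid')
    scripts/rpc.py bdev_malloc_create 32 512 -b NewBaseBdev -u "$uuid"   # reuse the old slot's UUID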
00:10:03.816   11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:10:03.816   11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev
00:10:03.816   11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:03.816   11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:10:03.816   11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:03.816   11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:03.816   11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:03.816   11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:03.816   11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:03.816   11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:03.816   11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:10:03.816   11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:03.816   11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:03.816  [
00:10:03.816  {
00:10:03.816  "name": "NewBaseBdev",
00:10:03.816  "aliases": [
00:10:03.816  "473e360f-ede5-48e7-be3c-e6dfad766dfc"
00:10:03.816  ],
00:10:03.816  "product_name": "Malloc disk",
00:10:03.816  "block_size": 512,
00:10:03.816  "num_blocks": 65536,
00:10:03.816  "uuid": "473e360f-ede5-48e7-be3c-e6dfad766dfc",
00:10:03.816  "assigned_rate_limits": {
00:10:03.816  "rw_ios_per_sec": 0,
00:10:03.816  "rw_mbytes_per_sec": 0,
00:10:03.816  "r_mbytes_per_sec": 0,
00:10:03.816  "w_mbytes_per_sec": 0
00:10:03.816  },
00:10:03.816  "claimed": true,
00:10:03.816  "claim_type": "exclusive_write",
00:10:03.816  "zoned": false,
00:10:03.816  "supported_io_types": {
00:10:03.816  "read": true,
00:10:03.816  "write": true,
00:10:03.816  "unmap": true,
00:10:03.816  "flush": true,
00:10:03.816  "reset": true,
00:10:03.816  "nvme_admin": false,
00:10:03.816  "nvme_io": false,
00:10:03.816  "nvme_io_md": false,
00:10:03.816  "write_zeroes": true,
00:10:03.816  "zcopy": true,
00:10:03.816  "get_zone_info": false,
00:10:03.816  "zone_management": false,
00:10:03.816  "zone_append": false,
00:10:03.816  "compare": false,
00:10:03.816  "compare_and_write": false,
00:10:03.816  "abort": true,
00:10:03.816  "seek_hole": false,
00:10:03.816  "seek_data": false,
00:10:03.816  "copy": true,
00:10:03.816  "nvme_iov_md": false
00:10:03.816  },
00:10:03.816  "memory_domains": [
00:10:03.816  {
00:10:03.816  "dma_device_id": "system",
00:10:03.816  "dma_device_type": 1
00:10:03.816  },
00:10:03.816  {
00:10:03.816  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:03.816  "dma_device_type": 2
00:10:03.816  }
00:10:03.816  ],
00:10:03.816  "driver_specific": {}
00:10:03.816  }
00:10:03.816  ]
00:10:03.816   11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:03.816   11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:10:03.816   11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3
00:10:03.816   11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:03.816   11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:03.816   11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:03.816   11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:03.816   11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:03.816   11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:03.816   11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:03.816   11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:03.816   11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:03.816    11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:03.816    11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:03.816    11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:03.816    11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:03.816    11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:03.816   11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:03.816    "name": "Existed_Raid",
00:10:03.816    "uuid": "b856bdef-7bc9-42af-8a19-22936421658c",
00:10:03.816    "strip_size_kb": 64,
00:10:03.816    "state": "online",
00:10:03.816    "raid_level": "concat",
00:10:03.816    "superblock": false,
00:10:03.816    "num_base_bdevs": 3,
00:10:03.816    "num_base_bdevs_discovered": 3,
00:10:03.816    "num_base_bdevs_operational": 3,
00:10:03.816    "base_bdevs_list": [
00:10:03.816      {
00:10:03.816        "name": "NewBaseBdev",
00:10:03.816        "uuid": "473e360f-ede5-48e7-be3c-e6dfad766dfc",
00:10:03.816        "is_configured": true,
00:10:03.816        "data_offset": 0,
00:10:03.816        "data_size": 65536
00:10:03.816      },
00:10:03.816      {
00:10:03.816        "name": "BaseBdev2",
00:10:03.816        "uuid": "a260e037-708f-4eb6-8b43-fb8070d1f339",
00:10:03.816        "is_configured": true,
00:10:03.816        "data_offset": 0,
00:10:03.816        "data_size": 65536
00:10:03.816      },
00:10:03.816      {
00:10:03.816        "name": "BaseBdev3",
00:10:03.816        "uuid": "148c539e-916c-43ed-8c22-a472e8cc3b59",
00:10:03.816        "is_configured": true,
00:10:03.816        "data_offset": 0,
00:10:03.816        "data_size": 65536
00:10:03.816      }
00:10:03.816    ]
00:10:03.816  }'
00:10:03.816   11:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:03.816   11:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:04.076   11:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:10:04.076   11:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:10:04.076   11:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:04.076   11:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:04.076   11:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:10:04.076   11:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:04.076    11:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:10:04.076    11:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:04.076    11:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:04.076    11:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:04.076  [2024-12-16 11:31:30.130045] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:04.335    11:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:04.335   11:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:04.335    "name": "Existed_Raid",
00:10:04.335    "aliases": [
00:10:04.335      "b856bdef-7bc9-42af-8a19-22936421658c"
00:10:04.335    ],
00:10:04.335    "product_name": "Raid Volume",
00:10:04.335    "block_size": 512,
00:10:04.335    "num_blocks": 196608,
00:10:04.335    "uuid": "b856bdef-7bc9-42af-8a19-22936421658c",
00:10:04.335    "assigned_rate_limits": {
00:10:04.335      "rw_ios_per_sec": 0,
00:10:04.335      "rw_mbytes_per_sec": 0,
00:10:04.335      "r_mbytes_per_sec": 0,
00:10:04.335      "w_mbytes_per_sec": 0
00:10:04.335    },
00:10:04.335    "claimed": false,
00:10:04.335    "zoned": false,
00:10:04.335    "supported_io_types": {
00:10:04.335      "read": true,
00:10:04.335      "write": true,
00:10:04.335      "unmap": true,
00:10:04.335      "flush": true,
00:10:04.335      "reset": true,
00:10:04.335      "nvme_admin": false,
00:10:04.335      "nvme_io": false,
00:10:04.335      "nvme_io_md": false,
00:10:04.335      "write_zeroes": true,
00:10:04.335      "zcopy": false,
00:10:04.335      "get_zone_info": false,
00:10:04.335      "zone_management": false,
00:10:04.335      "zone_append": false,
00:10:04.335      "compare": false,
00:10:04.335      "compare_and_write": false,
00:10:04.335      "abort": false,
00:10:04.335      "seek_hole": false,
00:10:04.335      "seek_data": false,
00:10:04.335      "copy": false,
00:10:04.335      "nvme_iov_md": false
00:10:04.335    },
00:10:04.335    "memory_domains": [
00:10:04.335      {
00:10:04.335        "dma_device_id": "system",
00:10:04.335        "dma_device_type": 1
00:10:04.335      },
00:10:04.335      {
00:10:04.335        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:04.335        "dma_device_type": 2
00:10:04.335      },
00:10:04.335      {
00:10:04.335        "dma_device_id": "system",
00:10:04.335        "dma_device_type": 1
00:10:04.335      },
00:10:04.335      {
00:10:04.335        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:04.335        "dma_device_type": 2
00:10:04.335      },
00:10:04.335      {
00:10:04.335        "dma_device_id": "system",
00:10:04.335        "dma_device_type": 1
00:10:04.335      },
00:10:04.335      {
00:10:04.335        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:04.335        "dma_device_type": 2
00:10:04.335      }
00:10:04.335    ],
00:10:04.335    "driver_specific": {
00:10:04.335      "raid": {
00:10:04.335        "uuid": "b856bdef-7bc9-42af-8a19-22936421658c",
00:10:04.335        "strip_size_kb": 64,
00:10:04.335        "state": "online",
00:10:04.335        "raid_level": "concat",
00:10:04.335        "superblock": false,
00:10:04.335        "num_base_bdevs": 3,
00:10:04.335        "num_base_bdevs_discovered": 3,
00:10:04.335        "num_base_bdevs_operational": 3,
00:10:04.335        "base_bdevs_list": [
00:10:04.335          {
00:10:04.335            "name": "NewBaseBdev",
00:10:04.335            "uuid": "473e360f-ede5-48e7-be3c-e6dfad766dfc",
00:10:04.335            "is_configured": true,
00:10:04.335            "data_offset": 0,
00:10:04.335            "data_size": 65536
00:10:04.335          },
00:10:04.335          {
00:10:04.335            "name": "BaseBdev2",
00:10:04.335            "uuid": "a260e037-708f-4eb6-8b43-fb8070d1f339",
00:10:04.335            "is_configured": true,
00:10:04.335            "data_offset": 0,
00:10:04.335            "data_size": 65536
00:10:04.335          },
00:10:04.335          {
00:10:04.335            "name": "BaseBdev3",
00:10:04.335            "uuid": "148c539e-916c-43ed-8c22-a472e8cc3b59",
00:10:04.335            "is_configured": true,
00:10:04.335            "data_offset": 0,
00:10:04.335            "data_size": 65536
00:10:04.335          }
00:10:04.335        ]
00:10:04.335      }
00:10:04.335    }
00:10:04.336  }'
00:10:04.336    11:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:04.336   11:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:10:04.336  BaseBdev2
00:10:04.336  BaseBdev3'
00:10:04.336    11:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:04.336   11:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:10:04.336   11:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:04.336    11:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:04.336    11:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:10:04.336    11:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:04.336    11:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:04.336    11:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:04.336   11:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:04.336   11:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:10:04.336   11:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:04.336    11:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:10:04.336    11:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:04.336    11:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:04.336    11:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:04.336    11:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:04.336   11:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:04.336   11:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:10:04.336   11:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:04.336    11:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:04.336    11:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:10:04.336    11:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:04.336    11:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:04.336    11:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:04.336   11:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:04.336   11:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
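The three bdev_raid.sh@191-193 checks above compare the geometry reported by the raid bdev against each configured base bdev. A condensed sketch of that loop (not the script verbatim), assuming rpc_cmd is the usual autotest wrapper around scripts/rpc.py:

    check_raid_geometry() {
        local raid=$1; shift
        local fields='.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
        local want got name
        want=$(rpc_cmd bdev_get_bdevs -b "$raid" | jq -r "$fields")
        for name in "$@"; do
            got=$(rpc_cmd bdev_get_bdevs -b "$name" | jq -r "$fields")
            [[ $got == "$want" ]] || return 1   # geometry mismatch fails the test
        done
    }
    # e.g. check_raid_geometry Existed_Raid NewBaseBdev BaseBdev2 BaseBdev3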
00:10:04.336   11:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:04.336   11:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:04.336   11:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:04.336  [2024-12-16 11:31:30.397271] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:04.336  [2024-12-16 11:31:30.397353] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:04.336  [2024-12-16 11:31:30.397472] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:04.336  [2024-12-16 11:31:30.397585] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:04.336  [2024-12-16 11:31:30.397658] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline
00:10:04.595   11:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:04.595   11:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 77031
00:10:04.595   11:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 77031 ']'
00:10:04.595   11:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 77031
00:10:04.595    11:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname
00:10:04.595   11:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:04.595    11:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77031
00:10:04.595  killing process with pid 77031
00:10:04.595   11:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:04.595   11:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:10:04.595   11:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77031'
00:10:04.595   11:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 77031
00:10:04.595  [2024-12-16 11:31:30.439622] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:04.595   11:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 77031
00:10:04.595  [2024-12-16 11:31:30.472357] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:04.856   11:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:10:04.856  
00:10:04.856  real	0m9.480s
00:10:04.856  user	0m16.167s
00:10:04.856  sys	0m2.027s
00:10:04.856   11:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:04.856   11:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:04.856  ************************************
00:10:04.856  END TEST raid_state_function_test
00:10:04.856  ************************************
00:10:04.856   11:31:30 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true
00:10:04.856   11:31:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:10:04.856   11:31:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:04.856   11:31:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:04.856  ************************************
00:10:04.856  START TEST raid_state_function_test_sb
00:10:04.856  ************************************
00:10:04.856   11:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true
00:10:04.856   11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:10:04.856   11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:10:04.856   11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:10:04.856   11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:10:04.856    11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:10:04.856    11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:04.856    11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:10:04.856    11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:04.856    11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:04.856    11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:10:04.856    11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:04.856    11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:04.856    11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:10:04.856    11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:04.856    11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:04.856   11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:10:04.856   11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:10:04.856   11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:10:04.856   11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:10:04.856   11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:10:04.856   11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:10:04.856   11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:10:04.856   11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:10:04.856   11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:10:04.856   11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:10:04.856   11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
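The bdev_raid.sh@215-223 trace above shows how this run's positional arguments (concat 3 true) become create-time flags: every level except raid1 gets a 64 KiB strip size, and superblock=true adds -s so each base bdev reserves space for an on-disk superblock (visible later as data_offset 2048 / data_size 63488 instead of 0 / 65536). A sketch of that mapping, reusing the variable names from the trace:

    if [[ $raid_level != raid1 ]]; then
        strip_size=64                          # KiB, only meaningful for striped levels
        strip_size_create_arg="-z $strip_size"
    fi
    if [[ $superblock == true ]]; then
        superblock_create_arg=-s               # write an on-disk superblock to each base bdev
    fi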
00:10:04.856   11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:10:04.856   11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77641
00:10:04.856   11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77641'
00:10:04.856  Process raid pid: 77641
00:10:04.856   11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 77641
00:10:04.856  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:04.856   11:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 77641 ']'
00:10:04.856   11:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:04.856   11:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:04.856   11:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:04.856   11:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:10:04.856   11:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:04.856  [2024-12-16 11:31:30.893941] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:10:04.856  [2024-12-16 11:31:30.894104] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:05.116  [2024-12-16 11:31:31.068010] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:05.116  [2024-12-16 11:31:31.125801] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:10:05.116  [2024-12-16 11:31:31.175346] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:05.116  [2024-12-16 11:31:31.175388] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:06.056   11:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:10:06.056   11:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0
00:10:06.056   11:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:06.056   11:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:06.056   11:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:06.056  [2024-12-16 11:31:31.796784] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:06.056  [2024-12-16 11:31:31.796859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:06.056  [2024-12-16 11:31:31.796886] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:06.056  [2024-12-16 11:31:31.796900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:06.056  [2024-12-16 11:31:31.796907] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:06.056  [2024-12-16 11:31:31.796922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
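Note that bdev_raid_create succeeds here even though none of the three base bdevs exist yet; the raid bdev is registered in the configuring state and waits for them. The rpc_cmd call traced at bdev_raid.sh@235 corresponds roughly to this direct invocation (a sketch; rpc_cmd wraps scripts/rpc.py against the /var/tmp/spdk.sock socket the app listens on):

    scripts/rpc.py bdev_raid_create -z 64 -s -r concat \
        -b "BaseBdev1 BaseBdev2 BaseBdev3" -n Existed_Raid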
00:10:06.056   11:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:06.056   11:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:06.056   11:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:06.056   11:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:06.056   11:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:06.056   11:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:06.056   11:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:06.056   11:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:06.056   11:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:06.056   11:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:06.056   11:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:06.056    11:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:06.056    11:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:06.056    11:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:06.056    11:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:06.056    11:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:06.056   11:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:06.056    "name": "Existed_Raid",
00:10:06.056    "uuid": "cd0722fc-d1e1-4a90-9ac2-aa0b369e22cc",
00:10:06.056    "strip_size_kb": 64,
00:10:06.056    "state": "configuring",
00:10:06.056    "raid_level": "concat",
00:10:06.056    "superblock": true,
00:10:06.056    "num_base_bdevs": 3,
00:10:06.056    "num_base_bdevs_discovered": 0,
00:10:06.056    "num_base_bdevs_operational": 3,
00:10:06.056    "base_bdevs_list": [
00:10:06.056      {
00:10:06.056        "name": "BaseBdev1",
00:10:06.056        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:06.056        "is_configured": false,
00:10:06.056        "data_offset": 0,
00:10:06.057        "data_size": 0
00:10:06.057      },
00:10:06.057      {
00:10:06.057        "name": "BaseBdev2",
00:10:06.057        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:06.057        "is_configured": false,
00:10:06.057        "data_offset": 0,
00:10:06.057        "data_size": 0
00:10:06.057      },
00:10:06.057      {
00:10:06.057        "name": "BaseBdev3",
00:10:06.057        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:06.057        "is_configured": false,
00:10:06.057        "data_offset": 0,
00:10:06.057        "data_size": 0
00:10:06.057      }
00:10:06.057    ]
00:10:06.057  }'
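verify_raid_bdev_state reads that state back through bdev_raid_get_bdevs plus a jq select, as traced at bdev_raid.sh@113 above. A compressed sketch of the query side, assuming the same rpc_cmd wrapper:

    tmp=$(rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    jq -r .state <<< "$tmp"                        # expect "configuring" at this point
    jq -r .num_base_bdevs_discovered <<< "$tmp"    # expect 0: no base bdevs exist yet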
00:10:06.057   11:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:06.057   11:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:06.317  [2024-12-16 11:31:32.268715] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:06.317  [2024-12-16 11:31:32.268838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:06.317  [2024-12-16 11:31:32.280735] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:06.317  [2024-12-16 11:31:32.280836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:06.317  [2024-12-16 11:31:32.280896] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:06.317  [2024-12-16 11:31:32.280943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:06.317  [2024-12-16 11:31:32.280982] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:06.317  [2024-12-16 11:31:32.281011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:06.317  [2024-12-16 11:31:32.303246] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:06.317  BaseBdev1
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:06.317  [
00:10:06.317  {
00:10:06.317  "name": "BaseBdev1",
00:10:06.317  "aliases": [
00:10:06.317  "577f574d-443c-4395-84d5-e70c0730f973"
00:10:06.317  ],
00:10:06.317  "product_name": "Malloc disk",
00:10:06.317  "block_size": 512,
00:10:06.317  "num_blocks": 65536,
00:10:06.317  "uuid": "577f574d-443c-4395-84d5-e70c0730f973",
00:10:06.317  "assigned_rate_limits": {
00:10:06.317  "rw_ios_per_sec": 0,
00:10:06.317  "rw_mbytes_per_sec": 0,
00:10:06.317  "r_mbytes_per_sec": 0,
00:10:06.317  "w_mbytes_per_sec": 0
00:10:06.317  },
00:10:06.317  "claimed": true,
00:10:06.317  "claim_type": "exclusive_write",
00:10:06.317  "zoned": false,
00:10:06.317  "supported_io_types": {
00:10:06.317  "read": true,
00:10:06.317  "write": true,
00:10:06.317  "unmap": true,
00:10:06.317  "flush": true,
00:10:06.317  "reset": true,
00:10:06.317  "nvme_admin": false,
00:10:06.317  "nvme_io": false,
00:10:06.317  "nvme_io_md": false,
00:10:06.317  "write_zeroes": true,
00:10:06.317  "zcopy": true,
00:10:06.317  "get_zone_info": false,
00:10:06.317  "zone_management": false,
00:10:06.317  "zone_append": false,
00:10:06.317  "compare": false,
00:10:06.317  "compare_and_write": false,
00:10:06.317  "abort": true,
00:10:06.317  "seek_hole": false,
00:10:06.317  "seek_data": false,
00:10:06.317  "copy": true,
00:10:06.317  "nvme_iov_md": false
00:10:06.317  },
00:10:06.317  "memory_domains": [
00:10:06.317  {
00:10:06.317  "dma_device_id": "system",
00:10:06.317  "dma_device_type": 1
00:10:06.317  },
00:10:06.317  {
00:10:06.317  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:06.317  "dma_device_type": 2
00:10:06.317  }
00:10:06.317  ],
00:10:06.317  "driver_specific": {}
00:10:06.317  }
00:10:06.317  ]
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
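waitforbdev, traced at autotest_common.sh@899-907 above, is a small helper: it waits for bdev examine to finish and then asks for the bdev with a timeout. Paraphrased from the trace, with the defaults and flags shown there:

    waitforbdev() {
        local bdev_name=$1
        local bdev_timeout=${2:-2000}              # ms, as in the trace
        rpc_cmd bdev_wait_for_examine
        rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
    }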
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:06.317   11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:06.317    11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:06.317    11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:06.317    11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:06.317    11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:06.317    11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:06.576   11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:06.576    "name": "Existed_Raid",
00:10:06.576    "uuid": "bf2fcedd-eeb3-4bfd-97bf-256ce1412118",
00:10:06.576    "strip_size_kb": 64,
00:10:06.576    "state": "configuring",
00:10:06.576    "raid_level": "concat",
00:10:06.576    "superblock": true,
00:10:06.576    "num_base_bdevs": 3,
00:10:06.576    "num_base_bdevs_discovered": 1,
00:10:06.576    "num_base_bdevs_operational": 3,
00:10:06.576    "base_bdevs_list": [
00:10:06.576      {
00:10:06.576        "name": "BaseBdev1",
00:10:06.576        "uuid": "577f574d-443c-4395-84d5-e70c0730f973",
00:10:06.576        "is_configured": true,
00:10:06.576        "data_offset": 2048,
00:10:06.576        "data_size": 63488
00:10:06.576      },
00:10:06.576      {
00:10:06.576        "name": "BaseBdev2",
00:10:06.576        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:06.576        "is_configured": false,
00:10:06.576        "data_offset": 0,
00:10:06.576        "data_size": 0
00:10:06.576      },
00:10:06.576      {
00:10:06.576        "name": "BaseBdev3",
00:10:06.576        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:06.576        "is_configured": false,
00:10:06.576        "data_offset": 0,
00:10:06.576        "data_size": 0
00:10:06.576      }
00:10:06.576    ]
00:10:06.576  }'
00:10:06.576   11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:06.576   11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:06.836   11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:06.836   11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:06.836   11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:06.836  [2024-12-16 11:31:32.822637] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:06.836  [2024-12-16 11:31:32.822773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:10:06.836   11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:06.836   11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:06.836   11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:06.836   11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:06.836  [2024-12-16 11:31:32.834679] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:06.836  [2024-12-16 11:31:32.837015] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:06.836  [2024-12-16 11:31:32.837071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:06.836  [2024-12-16 11:31:32.837084] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:06.836  [2024-12-16 11:31:32.837097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:06.836   11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:06.836   11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:10:06.836   11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:06.836   11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:06.836   11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:06.836   11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:06.836   11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:06.836   11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:06.836   11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:06.836   11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:06.836   11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:06.836   11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:06.836   11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:06.836    11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:06.836    11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:06.836    11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:06.836    11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:06.836    11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:06.836   11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:06.836    "name": "Existed_Raid",
00:10:06.836    "uuid": "62bbe1b6-ec4e-43a1-a0d3-869b07ff0521",
00:10:06.836    "strip_size_kb": 64,
00:10:06.836    "state": "configuring",
00:10:06.836    "raid_level": "concat",
00:10:06.836    "superblock": true,
00:10:06.836    "num_base_bdevs": 3,
00:10:06.836    "num_base_bdevs_discovered": 1,
00:10:06.836    "num_base_bdevs_operational": 3,
00:10:06.836    "base_bdevs_list": [
00:10:06.836      {
00:10:06.836        "name": "BaseBdev1",
00:10:06.836        "uuid": "577f574d-443c-4395-84d5-e70c0730f973",
00:10:06.836        "is_configured": true,
00:10:06.836        "data_offset": 2048,
00:10:06.836        "data_size": 63488
00:10:06.836      },
00:10:06.836      {
00:10:06.836        "name": "BaseBdev2",
00:10:06.836        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:06.836        "is_configured": false,
00:10:06.836        "data_offset": 0,
00:10:06.836        "data_size": 0
00:10:06.836      },
00:10:06.836      {
00:10:06.836        "name": "BaseBdev3",
00:10:06.836        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:06.836        "is_configured": false,
00:10:06.836        "data_offset": 0,
00:10:06.836        "data_size": 0
00:10:06.836      }
00:10:06.836    ]
00:10:06.836  }'
00:10:06.836   11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:06.836   11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:07.407   11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:10:07.407   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:07.407   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:07.407  [2024-12-16 11:31:33.341776] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:07.407  BaseBdev2
00:10:07.407   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:07.407   11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:10:07.407   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:10:07.407   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:07.407   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:10:07.407   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:07.407   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:07.407   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:07.407   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:07.407   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:07.407   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:07.407   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:10:07.407   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:07.407   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:07.407  [
00:10:07.407  {
00:10:07.407  "name": "BaseBdev2",
00:10:07.407  "aliases": [
00:10:07.407  "756545a4-3bb1-4dc0-8366-86f77085b5db"
00:10:07.407  ],
00:10:07.407  "product_name": "Malloc disk",
00:10:07.407  "block_size": 512,
00:10:07.407  "num_blocks": 65536,
00:10:07.407  "uuid": "756545a4-3bb1-4dc0-8366-86f77085b5db",
00:10:07.407  "assigned_rate_limits": {
00:10:07.407  "rw_ios_per_sec": 0,
00:10:07.407  "rw_mbytes_per_sec": 0,
00:10:07.407  "r_mbytes_per_sec": 0,
00:10:07.407  "w_mbytes_per_sec": 0
00:10:07.407  },
00:10:07.407  "claimed": true,
00:10:07.407  "claim_type": "exclusive_write",
00:10:07.407  "zoned": false,
00:10:07.407  "supported_io_types": {
00:10:07.407  "read": true,
00:10:07.407  "write": true,
00:10:07.407  "unmap": true,
00:10:07.407  "flush": true,
00:10:07.407  "reset": true,
00:10:07.407  "nvme_admin": false,
00:10:07.407  "nvme_io": false,
00:10:07.407  "nvme_io_md": false,
00:10:07.407  "write_zeroes": true,
00:10:07.407  "zcopy": true,
00:10:07.407  "get_zone_info": false,
00:10:07.407  "zone_management": false,
00:10:07.407  "zone_append": false,
00:10:07.407  "compare": false,
00:10:07.407  "compare_and_write": false,
00:10:07.407  "abort": true,
00:10:07.407  "seek_hole": false,
00:10:07.407  "seek_data": false,
00:10:07.407  "copy": true,
00:10:07.407  "nvme_iov_md": false
00:10:07.407  },
00:10:07.407  "memory_domains": [
00:10:07.407  {
00:10:07.407  "dma_device_id": "system",
00:10:07.407  "dma_device_type": 1
00:10:07.407  },
00:10:07.407  {
00:10:07.407  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:07.407  "dma_device_type": 2
00:10:07.407  }
00:10:07.407  ],
00:10:07.407  "driver_specific": {}
00:10:07.407  }
00:10:07.407  ]
00:10:07.407   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:07.407   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:10:07.407   11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:10:07.407   11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:07.407   11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:07.407   11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:07.407   11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:07.407   11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:07.407   11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:07.407   11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:07.407   11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:07.407   11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:07.407   11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:07.407   11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:07.407    11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:07.407    11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:07.407    11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:07.407    11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:07.407    11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:07.407   11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:07.407    "name": "Existed_Raid",
00:10:07.407    "uuid": "62bbe1b6-ec4e-43a1-a0d3-869b07ff0521",
00:10:07.407    "strip_size_kb": 64,
00:10:07.407    "state": "configuring",
00:10:07.407    "raid_level": "concat",
00:10:07.407    "superblock": true,
00:10:07.407    "num_base_bdevs": 3,
00:10:07.407    "num_base_bdevs_discovered": 2,
00:10:07.407    "num_base_bdevs_operational": 3,
00:10:07.407    "base_bdevs_list": [
00:10:07.407      {
00:10:07.407        "name": "BaseBdev1",
00:10:07.407        "uuid": "577f574d-443c-4395-84d5-e70c0730f973",
00:10:07.408        "is_configured": true,
00:10:07.408        "data_offset": 2048,
00:10:07.408        "data_size": 63488
00:10:07.408      },
00:10:07.408      {
00:10:07.408        "name": "BaseBdev2",
00:10:07.408        "uuid": "756545a4-3bb1-4dc0-8366-86f77085b5db",
00:10:07.408        "is_configured": true,
00:10:07.408        "data_offset": 2048,
00:10:07.408        "data_size": 63488
00:10:07.408      },
00:10:07.408      {
00:10:07.408        "name": "BaseBdev3",
00:10:07.408        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:07.408        "is_configured": false,
00:10:07.408        "data_offset": 0,
00:10:07.408        "data_size": 0
00:10:07.408      }
00:10:07.408    ]
00:10:07.408  }'
00:10:07.408   11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:07.408   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:07.979  [2024-12-16 11:31:33.845913] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:07.979  [2024-12-16 11:31:33.846178] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:10:07.979  [2024-12-16 11:31:33.846202] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:10:07.979  [2024-12-16 11:31:33.846591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:10:07.979  BaseBdev3
00:10:07.979  [2024-12-16 11:31:33.846759] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:10:07.979  [2024-12-16 11:31:33.846784] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:10:07.979  [2024-12-16 11:31:33.846938] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:07.979  [
00:10:07.979  {
00:10:07.979  "name": "BaseBdev3",
00:10:07.979  "aliases": [
00:10:07.979  "8aa1b0f0-c87b-4276-b6b1-e577c6ac0283"
00:10:07.979  ],
00:10:07.979  "product_name": "Malloc disk",
00:10:07.979  "block_size": 512,
00:10:07.979  "num_blocks": 65536,
00:10:07.979  "uuid": "8aa1b0f0-c87b-4276-b6b1-e577c6ac0283",
00:10:07.979  "assigned_rate_limits": {
00:10:07.979  "rw_ios_per_sec": 0,
00:10:07.979  "rw_mbytes_per_sec": 0,
00:10:07.979  "r_mbytes_per_sec": 0,
00:10:07.979  "w_mbytes_per_sec": 0
00:10:07.979  },
00:10:07.979  "claimed": true,
00:10:07.979  "claim_type": "exclusive_write",
00:10:07.979  "zoned": false,
00:10:07.979  "supported_io_types": {
00:10:07.979  "read": true,
00:10:07.979  "write": true,
00:10:07.979  "unmap": true,
00:10:07.979  "flush": true,
00:10:07.979  "reset": true,
00:10:07.979  "nvme_admin": false,
00:10:07.979  "nvme_io": false,
00:10:07.979  "nvme_io_md": false,
00:10:07.979  "write_zeroes": true,
00:10:07.979  "zcopy": true,
00:10:07.979  "get_zone_info": false,
00:10:07.979  "zone_management": false,
00:10:07.979  "zone_append": false,
00:10:07.979  "compare": false,
00:10:07.979  "compare_and_write": false,
00:10:07.979  "abort": true,
00:10:07.979  "seek_hole": false,
00:10:07.979  "seek_data": false,
00:10:07.979  "copy": true,
00:10:07.979  "nvme_iov_md": false
00:10:07.979  },
00:10:07.979  "memory_domains": [
00:10:07.979  {
00:10:07.979  "dma_device_id": "system",
00:10:07.979  "dma_device_type": 1
00:10:07.979  },
00:10:07.979  {
00:10:07.979  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:07.979  "dma_device_type": 2
00:10:07.979  }
00:10:07.979  ],
00:10:07.979  "driver_specific": {}
00:10:07.979  }
00:10:07.979  ]
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:07.979    11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:07.979    11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:07.979    11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:07.979    11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:07.979    11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:07.979    "name": "Existed_Raid",
00:10:07.979    "uuid": "62bbe1b6-ec4e-43a1-a0d3-869b07ff0521",
00:10:07.979    "strip_size_kb": 64,
00:10:07.979    "state": "online",
00:10:07.979    "raid_level": "concat",
00:10:07.979    "superblock": true,
00:10:07.979    "num_base_bdevs": 3,
00:10:07.979    "num_base_bdevs_discovered": 3,
00:10:07.979    "num_base_bdevs_operational": 3,
00:10:07.979    "base_bdevs_list": [
00:10:07.979      {
00:10:07.979        "name": "BaseBdev1",
00:10:07.979        "uuid": "577f574d-443c-4395-84d5-e70c0730f973",
00:10:07.979        "is_configured": true,
00:10:07.979        "data_offset": 2048,
00:10:07.979        "data_size": 63488
00:10:07.979      },
00:10:07.979      {
00:10:07.979        "name": "BaseBdev2",
00:10:07.979        "uuid": "756545a4-3bb1-4dc0-8366-86f77085b5db",
00:10:07.979        "is_configured": true,
00:10:07.979        "data_offset": 2048,
00:10:07.979        "data_size": 63488
00:10:07.979      },
00:10:07.979      {
00:10:07.979        "name": "BaseBdev3",
00:10:07.979        "uuid": "8aa1b0f0-c87b-4276-b6b1-e577c6ac0283",
00:10:07.979        "is_configured": true,
00:10:07.979        "data_offset": 2048,
00:10:07.979        "data_size": 63488
00:10:07.979      }
00:10:07.979    ]
00:10:07.979  }'
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:07.979   11:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:08.548   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:10:08.549   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:10:08.549   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:08.549   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:08.549   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:10:08.549   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:08.549    11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:10:08.549    11:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:08.549    11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:08.549    11:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:08.549  [2024-12-16 11:31:34.350053] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:08.549    11:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:08.549   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:08.549    "name": "Existed_Raid",
00:10:08.549    "aliases": [
00:10:08.549      "62bbe1b6-ec4e-43a1-a0d3-869b07ff0521"
00:10:08.549    ],
00:10:08.549    "product_name": "Raid Volume",
00:10:08.549    "block_size": 512,
00:10:08.549    "num_blocks": 190464,
00:10:08.549    "uuid": "62bbe1b6-ec4e-43a1-a0d3-869b07ff0521",
00:10:08.549    "assigned_rate_limits": {
00:10:08.549      "rw_ios_per_sec": 0,
00:10:08.549      "rw_mbytes_per_sec": 0,
00:10:08.549      "r_mbytes_per_sec": 0,
00:10:08.549      "w_mbytes_per_sec": 0
00:10:08.549    },
00:10:08.549    "claimed": false,
00:10:08.549    "zoned": false,
00:10:08.549    "supported_io_types": {
00:10:08.549      "read": true,
00:10:08.549      "write": true,
00:10:08.549      "unmap": true,
00:10:08.549      "flush": true,
00:10:08.549      "reset": true,
00:10:08.549      "nvme_admin": false,
00:10:08.549      "nvme_io": false,
00:10:08.549      "nvme_io_md": false,
00:10:08.549      "write_zeroes": true,
00:10:08.549      "zcopy": false,
00:10:08.549      "get_zone_info": false,
00:10:08.549      "zone_management": false,
00:10:08.549      "zone_append": false,
00:10:08.549      "compare": false,
00:10:08.549      "compare_and_write": false,
00:10:08.549      "abort": false,
00:10:08.549      "seek_hole": false,
00:10:08.549      "seek_data": false,
00:10:08.549      "copy": false,
00:10:08.549      "nvme_iov_md": false
00:10:08.549    },
00:10:08.549    "memory_domains": [
00:10:08.549      {
00:10:08.549        "dma_device_id": "system",
00:10:08.549        "dma_device_type": 1
00:10:08.549      },
00:10:08.549      {
00:10:08.549        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:08.549        "dma_device_type": 2
00:10:08.549      },
00:10:08.549      {
00:10:08.549        "dma_device_id": "system",
00:10:08.549        "dma_device_type": 1
00:10:08.549      },
00:10:08.549      {
00:10:08.549        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:08.549        "dma_device_type": 2
00:10:08.549      },
00:10:08.549      {
00:10:08.549        "dma_device_id": "system",
00:10:08.549        "dma_device_type": 1
00:10:08.549      },
00:10:08.549      {
00:10:08.549        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:08.549        "dma_device_type": 2
00:10:08.549      }
00:10:08.549    ],
00:10:08.549    "driver_specific": {
00:10:08.549      "raid": {
00:10:08.549        "uuid": "62bbe1b6-ec4e-43a1-a0d3-869b07ff0521",
00:10:08.549        "strip_size_kb": 64,
00:10:08.549        "state": "online",
00:10:08.549        "raid_level": "concat",
00:10:08.549        "superblock": true,
00:10:08.549        "num_base_bdevs": 3,
00:10:08.549        "num_base_bdevs_discovered": 3,
00:10:08.549        "num_base_bdevs_operational": 3,
00:10:08.549        "base_bdevs_list": [
00:10:08.549          {
00:10:08.549            "name": "BaseBdev1",
00:10:08.549            "uuid": "577f574d-443c-4395-84d5-e70c0730f973",
00:10:08.549            "is_configured": true,
00:10:08.549            "data_offset": 2048,
00:10:08.549            "data_size": 63488
00:10:08.549          },
00:10:08.549          {
00:10:08.549            "name": "BaseBdev2",
00:10:08.549            "uuid": "756545a4-3bb1-4dc0-8366-86f77085b5db",
00:10:08.549            "is_configured": true,
00:10:08.549            "data_offset": 2048,
00:10:08.549            "data_size": 63488
00:10:08.549          },
00:10:08.549          {
00:10:08.549            "name": "BaseBdev3",
00:10:08.549            "uuid": "8aa1b0f0-c87b-4276-b6b1-e577c6ac0283",
00:10:08.549            "is_configured": true,
00:10:08.549            "data_offset": 2048,
00:10:08.549            "data_size": 63488
00:10:08.549          }
00:10:08.549        ]
00:10:08.549      }
00:10:08.549    }
00:10:08.549  }'
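The num_blocks reported for the volume above follows directly from the base bdev geometry in the same dump: each 65536-block malloc bdev gives up 2048 blocks to the superblock (data_offset 2048), leaving data_size 63488, and a three-member concat simply adds those up:

    echo $(( (65536 - 2048) * 3 ))   # 190464, the "num_blocks" of Existed_Raid above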
00:10:08.549    11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:08.549   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:10:08.549  BaseBdev2
00:10:08.549  BaseBdev3'
00:10:08.549    11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:08.549   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:10:08.549   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:08.549    11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:10:08.549    11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:08.549    11:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:08.549    11:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:08.549    11:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:08.549   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:08.549   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:10:08.549   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:08.549    11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:10:08.549    11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:08.549    11:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:08.549    11:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:08.549    11:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:08.549   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:08.549   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:10:08.549   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:08.549    11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:10:08.549    11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:08.549    11:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:08.549    11:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:08.809    11:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:08.809   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:08.809   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:10:08.809   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:10:08.809   11:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:08.809   11:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:08.809  [2024-12-16 11:31:34.649274] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:10:08.809  [2024-12-16 11:31:34.649410] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:08.809  [2024-12-16 11:31:34.649520] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:08.809   11:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:08.809   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:10:08.809   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:10:08.809   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:10:08.809   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1
00:10:08.809   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:10:08.809   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2
00:10:08.809   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:08.809   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:10:08.809   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:08.809   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:08.809   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:08.809   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:08.809   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:08.809   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:08.809   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:08.809    11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:08.809    11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:08.809    11:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:08.809    11:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:08.809    11:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:08.809   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:08.809    "name": "Existed_Raid",
00:10:08.809    "uuid": "62bbe1b6-ec4e-43a1-a0d3-869b07ff0521",
00:10:08.809    "strip_size_kb": 64,
00:10:08.809    "state": "offline",
00:10:08.809    "raid_level": "concat",
00:10:08.809    "superblock": true,
00:10:08.809    "num_base_bdevs": 3,
00:10:08.809    "num_base_bdevs_discovered": 2,
00:10:08.809    "num_base_bdevs_operational": 2,
00:10:08.809    "base_bdevs_list": [
00:10:08.809      {
00:10:08.809        "name": null,
00:10:08.809        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:08.809        "is_configured": false,
00:10:08.809        "data_offset": 0,
00:10:08.809        "data_size": 63488
00:10:08.809      },
00:10:08.809      {
00:10:08.809        "name": "BaseBdev2",
00:10:08.809        "uuid": "756545a4-3bb1-4dc0-8366-86f77085b5db",
00:10:08.809        "is_configured": true,
00:10:08.809        "data_offset": 2048,
00:10:08.809        "data_size": 63488
00:10:08.809      },
00:10:08.809      {
00:10:08.809        "name": "BaseBdev3",
00:10:08.809        "uuid": "8aa1b0f0-c87b-4276-b6b1-e577c6ac0283",
00:10:08.809        "is_configured": true,
00:10:08.809        "data_offset": 2048,
00:10:08.809        "data_size": 63488
00:10:08.809      }
00:10:08.809    ]
00:10:08.809  }'
00:10:08.809   11:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:08.809   11:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:10:09.378    11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:09.378    11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:09.378    11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.378    11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:10:09.378    11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.378  [2024-12-16 11:31:35.228773] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:10:09.378    11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:09.378    11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:09.378    11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.378    11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:10:09.378    11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.378  [2024-12-16 11:31:35.304615] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:09.378  [2024-12-16 11:31:35.304764] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:10:09.378    11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:09.378    11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:10:09.378    11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:09.378    11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.378    11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.378  BaseBdev2
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.378  [
00:10:09.378  {
00:10:09.378  "name": "BaseBdev2",
00:10:09.378  "aliases": [
00:10:09.378  "5fff9346-d757-4034-ba49-81c06d283919"
00:10:09.378  ],
00:10:09.378  "product_name": "Malloc disk",
00:10:09.378  "block_size": 512,
00:10:09.378  "num_blocks": 65536,
00:10:09.378  "uuid": "5fff9346-d757-4034-ba49-81c06d283919",
00:10:09.378  "assigned_rate_limits": {
00:10:09.378  "rw_ios_per_sec": 0,
00:10:09.378  "rw_mbytes_per_sec": 0,
00:10:09.378  "r_mbytes_per_sec": 0,
00:10:09.378  "w_mbytes_per_sec": 0
00:10:09.378  },
00:10:09.378  "claimed": false,
00:10:09.378  "zoned": false,
00:10:09.378  "supported_io_types": {
00:10:09.378  "read": true,
00:10:09.378  "write": true,
00:10:09.378  "unmap": true,
00:10:09.378  "flush": true,
00:10:09.378  "reset": true,
00:10:09.378  "nvme_admin": false,
00:10:09.378  "nvme_io": false,
00:10:09.378  "nvme_io_md": false,
00:10:09.378  "write_zeroes": true,
00:10:09.378  "zcopy": true,
00:10:09.378  "get_zone_info": false,
00:10:09.378  "zone_management": false,
00:10:09.378  "zone_append": false,
00:10:09.378  "compare": false,
00:10:09.378  "compare_and_write": false,
00:10:09.378  "abort": true,
00:10:09.378  "seek_hole": false,
00:10:09.378  "seek_data": false,
00:10:09.378  "copy": true,
00:10:09.378  "nvme_iov_md": false
00:10:09.378  },
00:10:09.378  "memory_domains": [
00:10:09.378  {
00:10:09.378  "dma_device_id": "system",
00:10:09.378  "dma_device_type": 1
00:10:09.378  },
00:10:09.378  {
00:10:09.378  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:09.378  "dma_device_type": 2
00:10:09.378  }
00:10:09.378  ],
00:10:09.378  "driver_specific": {}
00:10:09.378  }
00:10:09.378  ]
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.378  BaseBdev3
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:09.378   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:09.639   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:09.639   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:09.639   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.639   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:09.639   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:10:09.639   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:09.639   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.639  [
00:10:09.639  {
00:10:09.639  "name": "BaseBdev3",
00:10:09.639  "aliases": [
00:10:09.639  "9034dcd4-77e2-4d22-9748-187a2bc0528c"
00:10:09.639  ],
00:10:09.639  "product_name": "Malloc disk",
00:10:09.639  "block_size": 512,
00:10:09.639  "num_blocks": 65536,
00:10:09.639  "uuid": "9034dcd4-77e2-4d22-9748-187a2bc0528c",
00:10:09.639  "assigned_rate_limits": {
00:10:09.639  "rw_ios_per_sec": 0,
00:10:09.639  "rw_mbytes_per_sec": 0,
00:10:09.639  "r_mbytes_per_sec": 0,
00:10:09.639  "w_mbytes_per_sec": 0
00:10:09.639  },
00:10:09.639  "claimed": false,
00:10:09.639  "zoned": false,
00:10:09.639  "supported_io_types": {
00:10:09.639  "read": true,
00:10:09.639  "write": true,
00:10:09.639  "unmap": true,
00:10:09.639  "flush": true,
00:10:09.639  "reset": true,
00:10:09.639  "nvme_admin": false,
00:10:09.639  "nvme_io": false,
00:10:09.639  "nvme_io_md": false,
00:10:09.639  "write_zeroes": true,
00:10:09.639  "zcopy": true,
00:10:09.639  "get_zone_info": false,
00:10:09.639  "zone_management": false,
00:10:09.639  "zone_append": false,
00:10:09.639  "compare": false,
00:10:09.639  "compare_and_write": false,
00:10:09.639  "abort": true,
00:10:09.639  "seek_hole": false,
00:10:09.639  "seek_data": false,
00:10:09.639  "copy": true,
00:10:09.639  "nvme_iov_md": false
00:10:09.639  },
00:10:09.639  "memory_domains": [
00:10:09.639  {
00:10:09.639  "dma_device_id": "system",
00:10:09.639  "dma_device_type": 1
00:10:09.639  },
00:10:09.639  {
00:10:09.639  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:09.639  "dma_device_type": 2
00:10:09.639  }
00:10:09.639  ],
00:10:09.639  "driver_specific": {}
00:10:09.639  }
00:10:09.639  ]
00:10:09.639   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:09.639   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:10:09.639   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:09.639   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:09.639   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:09.639   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:09.639   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.639  [2024-12-16 11:31:35.483496] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:09.639  [2024-12-16 11:31:35.483671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:09.639  [2024-12-16 11:31:35.483727] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:09.639  [2024-12-16 11:31:35.485888] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:09.639   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:09.639   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:09.639   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:09.639   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:09.639   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:09.639   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:09.639   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:09.639   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:09.639   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:09.639   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:09.639   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:09.639    11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:09.639    11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:09.639    11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:09.639    11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.639    11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:09.639   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:09.639    "name": "Existed_Raid",
00:10:09.639    "uuid": "6babad96-a726-4ea4-a783-d87fee0e851f",
00:10:09.639    "strip_size_kb": 64,
00:10:09.639    "state": "configuring",
00:10:09.639    "raid_level": "concat",
00:10:09.639    "superblock": true,
00:10:09.639    "num_base_bdevs": 3,
00:10:09.639    "num_base_bdevs_discovered": 2,
00:10:09.639    "num_base_bdevs_operational": 3,
00:10:09.639    "base_bdevs_list": [
00:10:09.639      {
00:10:09.639        "name": "BaseBdev1",
00:10:09.639        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:09.639        "is_configured": false,
00:10:09.639        "data_offset": 0,
00:10:09.639        "data_size": 0
00:10:09.639      },
00:10:09.639      {
00:10:09.639        "name": "BaseBdev2",
00:10:09.639        "uuid": "5fff9346-d757-4034-ba49-81c06d283919",
00:10:09.639        "is_configured": true,
00:10:09.639        "data_offset": 2048,
00:10:09.639        "data_size": 63488
00:10:09.639      },
00:10:09.639      {
00:10:09.639        "name": "BaseBdev3",
00:10:09.639        "uuid": "9034dcd4-77e2-4d22-9748-187a2bc0528c",
00:10:09.639        "is_configured": true,
00:10:09.639        "data_offset": 2048,
00:10:09.639        "data_size": 63488
00:10:09.639      }
00:10:09.639    ]
00:10:09.639  }'
00:10:09.639   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:09.639   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.208   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:10:10.208   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:10.208   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.208  [2024-12-16 11:31:35.970753] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:10:10.208   11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:10.208   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:10.208   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:10.208   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:10.208   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:10.209   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:10.209   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:10.209   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:10.209   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:10.209   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:10.209   11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:10.209    11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:10.209    11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:10.209    11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.209    11:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:10.209    11:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:10.209   11:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:10.209    "name": "Existed_Raid",
00:10:10.209    "uuid": "6babad96-a726-4ea4-a783-d87fee0e851f",
00:10:10.209    "strip_size_kb": 64,
00:10:10.209    "state": "configuring",
00:10:10.209    "raid_level": "concat",
00:10:10.209    "superblock": true,
00:10:10.209    "num_base_bdevs": 3,
00:10:10.209    "num_base_bdevs_discovered": 1,
00:10:10.209    "num_base_bdevs_operational": 3,
00:10:10.209    "base_bdevs_list": [
00:10:10.209      {
00:10:10.209        "name": "BaseBdev1",
00:10:10.209        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:10.209        "is_configured": false,
00:10:10.209        "data_offset": 0,
00:10:10.209        "data_size": 0
00:10:10.209      },
00:10:10.209      {
00:10:10.209        "name": null,
00:10:10.209        "uuid": "5fff9346-d757-4034-ba49-81c06d283919",
00:10:10.209        "is_configured": false,
00:10:10.209        "data_offset": 0,
00:10:10.209        "data_size": 63488
00:10:10.209      },
00:10:10.209      {
00:10:10.209        "name": "BaseBdev3",
00:10:10.209        "uuid": "9034dcd4-77e2-4d22-9748-187a2bc0528c",
00:10:10.209        "is_configured": true,
00:10:10.209        "data_offset": 2048,
00:10:10.209        "data_size": 63488
00:10:10.209      }
00:10:10.209    ]
00:10:10.209  }'
00:10:10.209   11:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:10.209   11:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.468    11:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:10.468    11:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:10.468    11:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:10.468    11:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.468    11:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:10.468   11:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:10:10.468   11:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:10.468   11:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:10.468   11:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.468  [2024-12-16 11:31:36.485184] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:10.468  BaseBdev1
00:10:10.468   11:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:10.468   11:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:10:10.468   11:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:10:10.468   11:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:10.468   11:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:10:10.468   11:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:10.468   11:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:10.468   11:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:10.468   11:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:10.468   11:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.468   11:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:10.468   11:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:10.468   11:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:10.468   11:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.468  [
00:10:10.468  {
00:10:10.468  "name": "BaseBdev1",
00:10:10.468  "aliases": [
00:10:10.468  "065d91b4-5aaf-4fe8-a27b-83d32fac9818"
00:10:10.468  ],
00:10:10.468  "product_name": "Malloc disk",
00:10:10.468  "block_size": 512,
00:10:10.468  "num_blocks": 65536,
00:10:10.468  "uuid": "065d91b4-5aaf-4fe8-a27b-83d32fac9818",
00:10:10.468  "assigned_rate_limits": {
00:10:10.468  "rw_ios_per_sec": 0,
00:10:10.468  "rw_mbytes_per_sec": 0,
00:10:10.468  "r_mbytes_per_sec": 0,
00:10:10.468  "w_mbytes_per_sec": 0
00:10:10.468  },
00:10:10.468  "claimed": true,
00:10:10.468  "claim_type": "exclusive_write",
00:10:10.468  "zoned": false,
00:10:10.468  "supported_io_types": {
00:10:10.468  "read": true,
00:10:10.468  "write": true,
00:10:10.468  "unmap": true,
00:10:10.468  "flush": true,
00:10:10.468  "reset": true,
00:10:10.468  "nvme_admin": false,
00:10:10.468  "nvme_io": false,
00:10:10.468  "nvme_io_md": false,
00:10:10.468  "write_zeroes": true,
00:10:10.468  "zcopy": true,
00:10:10.468  "get_zone_info": false,
00:10:10.468  "zone_management": false,
00:10:10.468  "zone_append": false,
00:10:10.468  "compare": false,
00:10:10.468  "compare_and_write": false,
00:10:10.468  "abort": true,
00:10:10.468  "seek_hole": false,
00:10:10.468  "seek_data": false,
00:10:10.468  "copy": true,
00:10:10.468  "nvme_iov_md": false
00:10:10.468  },
00:10:10.468  "memory_domains": [
00:10:10.468  {
00:10:10.468  "dma_device_id": "system",
00:10:10.468  "dma_device_type": 1
00:10:10.468  },
00:10:10.468  {
00:10:10.468  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:10.468  "dma_device_type": 2
00:10:10.468  }
00:10:10.468  ],
00:10:10.468  "driver_specific": {}
00:10:10.468  }
00:10:10.468  ]
00:10:10.468   11:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:10.468   11:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:10:10.468   11:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:10.468   11:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:10.468   11:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:10.468   11:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:10.468   11:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:10.468   11:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:10.468   11:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:10.468   11:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:10.468   11:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:10.468   11:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:10.468    11:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:10.468    11:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:10.468    11:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:10.468    11:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.728    11:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:10.728   11:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:10.728    "name": "Existed_Raid",
00:10:10.728    "uuid": "6babad96-a726-4ea4-a783-d87fee0e851f",
00:10:10.728    "strip_size_kb": 64,
00:10:10.728    "state": "configuring",
00:10:10.728    "raid_level": "concat",
00:10:10.728    "superblock": true,
00:10:10.728    "num_base_bdevs": 3,
00:10:10.728    "num_base_bdevs_discovered": 2,
00:10:10.728    "num_base_bdevs_operational": 3,
00:10:10.728    "base_bdevs_list": [
00:10:10.728      {
00:10:10.728        "name": "BaseBdev1",
00:10:10.728        "uuid": "065d91b4-5aaf-4fe8-a27b-83d32fac9818",
00:10:10.728        "is_configured": true,
00:10:10.728        "data_offset": 2048,
00:10:10.728        "data_size": 63488
00:10:10.728      },
00:10:10.728      {
00:10:10.728        "name": null,
00:10:10.728        "uuid": "5fff9346-d757-4034-ba49-81c06d283919",
00:10:10.728        "is_configured": false,
00:10:10.728        "data_offset": 0,
00:10:10.728        "data_size": 63488
00:10:10.728      },
00:10:10.728      {
00:10:10.728        "name": "BaseBdev3",
00:10:10.728        "uuid": "9034dcd4-77e2-4d22-9748-187a2bc0528c",
00:10:10.728        "is_configured": true,
00:10:10.728        "data_offset": 2048,
00:10:10.728        "data_size": 63488
00:10:10.728      }
00:10:10.728    ]
00:10:10.728  }'
00:10:10.728   11:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:10.728   11:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.988    11:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:10.988    11:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:10.988    11:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:10.988    11:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.988    11:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:10.988   11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:10:10.988   11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:10:10.988   11:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:10.988   11:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.988  [2024-12-16 11:31:37.044342] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:10.988   11:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:10.988   11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:10.988   11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:10.988   11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:10.988   11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:10.988   11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:10.988   11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:10.988   11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:10.988   11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:10.988   11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:10.988   11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:11.248    11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:11.248    11:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:11.248    11:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:11.248    11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:11.248    11:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:11.248   11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:11.248    "name": "Existed_Raid",
00:10:11.248    "uuid": "6babad96-a726-4ea4-a783-d87fee0e851f",
00:10:11.248    "strip_size_kb": 64,
00:10:11.248    "state": "configuring",
00:10:11.248    "raid_level": "concat",
00:10:11.248    "superblock": true,
00:10:11.248    "num_base_bdevs": 3,
00:10:11.248    "num_base_bdevs_discovered": 1,
00:10:11.248    "num_base_bdevs_operational": 3,
00:10:11.248    "base_bdevs_list": [
00:10:11.248      {
00:10:11.248        "name": "BaseBdev1",
00:10:11.248        "uuid": "065d91b4-5aaf-4fe8-a27b-83d32fac9818",
00:10:11.248        "is_configured": true,
00:10:11.248        "data_offset": 2048,
00:10:11.248        "data_size": 63488
00:10:11.248      },
00:10:11.248      {
00:10:11.248        "name": null,
00:10:11.248        "uuid": "5fff9346-d757-4034-ba49-81c06d283919",
00:10:11.248        "is_configured": false,
00:10:11.248        "data_offset": 0,
00:10:11.248        "data_size": 63488
00:10:11.248      },
00:10:11.248      {
00:10:11.248        "name": null,
00:10:11.248        "uuid": "9034dcd4-77e2-4d22-9748-187a2bc0528c",
00:10:11.248        "is_configured": false,
00:10:11.248        "data_offset": 0,
00:10:11.248        "data_size": 63488
00:10:11.248      }
00:10:11.248    ]
00:10:11.248  }'
00:10:11.248   11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:11.248   11:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:11.508    11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:11.508    11:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:11.508    11:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:11.508    11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:11.508    11:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:11.508   11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:10:11.508   11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:10:11.508   11:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:11.508   11:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:11.508  [2024-12-16 11:31:37.559511] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:11.508   11:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:11.508   11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:11.508   11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:11.508   11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:11.508   11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:11.508   11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:11.508   11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:11.508   11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:11.508   11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:11.508   11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:11.508   11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:11.508    11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:11.508    11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:11.508    11:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:11.508    11:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:11.767    11:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:11.767   11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:11.767    "name": "Existed_Raid",
00:10:11.767    "uuid": "6babad96-a726-4ea4-a783-d87fee0e851f",
00:10:11.767    "strip_size_kb": 64,
00:10:11.767    "state": "configuring",
00:10:11.767    "raid_level": "concat",
00:10:11.767    "superblock": true,
00:10:11.767    "num_base_bdevs": 3,
00:10:11.767    "num_base_bdevs_discovered": 2,
00:10:11.767    "num_base_bdevs_operational": 3,
00:10:11.767    "base_bdevs_list": [
00:10:11.767      {
00:10:11.767        "name": "BaseBdev1",
00:10:11.768        "uuid": "065d91b4-5aaf-4fe8-a27b-83d32fac9818",
00:10:11.768        "is_configured": true,
00:10:11.768        "data_offset": 2048,
00:10:11.768        "data_size": 63488
00:10:11.768      },
00:10:11.768      {
00:10:11.768        "name": null,
00:10:11.768        "uuid": "5fff9346-d757-4034-ba49-81c06d283919",
00:10:11.768        "is_configured": false,
00:10:11.768        "data_offset": 0,
00:10:11.768        "data_size": 63488
00:10:11.768      },
00:10:11.768      {
00:10:11.768        "name": "BaseBdev3",
00:10:11.768        "uuid": "9034dcd4-77e2-4d22-9748-187a2bc0528c",
00:10:11.768        "is_configured": true,
00:10:11.768        "data_offset": 2048,
00:10:11.768        "data_size": 63488
00:10:11.768      }
00:10:11.768    ]
00:10:11.768  }'
00:10:11.768   11:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:11.768   11:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:12.027    11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:12.027    11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:12.027    11:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:12.027    11:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:12.027    11:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:12.027   11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:10:12.027   11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:10:12.027   11:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:12.027   11:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:12.027  [2024-12-16 11:31:38.086631] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:10:12.286   11:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:12.286   11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:12.286   11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:12.286   11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:12.286   11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:12.286   11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:12.286   11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:12.286   11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:12.286   11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:12.286   11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:12.286   11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:12.286    11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:12.286    11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:12.286    11:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:12.286    11:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:12.286    11:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:12.286   11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:12.286    "name": "Existed_Raid",
00:10:12.286    "uuid": "6babad96-a726-4ea4-a783-d87fee0e851f",
00:10:12.286    "strip_size_kb": 64,
00:10:12.286    "state": "configuring",
00:10:12.286    "raid_level": "concat",
00:10:12.286    "superblock": true,
00:10:12.286    "num_base_bdevs": 3,
00:10:12.286    "num_base_bdevs_discovered": 1,
00:10:12.286    "num_base_bdevs_operational": 3,
00:10:12.286    "base_bdevs_list": [
00:10:12.286      {
00:10:12.286        "name": null,
00:10:12.286        "uuid": "065d91b4-5aaf-4fe8-a27b-83d32fac9818",
00:10:12.286        "is_configured": false,
00:10:12.286        "data_offset": 0,
00:10:12.286        "data_size": 63488
00:10:12.286      },
00:10:12.286      {
00:10:12.286        "name": null,
00:10:12.286        "uuid": "5fff9346-d757-4034-ba49-81c06d283919",
00:10:12.286        "is_configured": false,
00:10:12.286        "data_offset": 0,
00:10:12.286        "data_size": 63488
00:10:12.286      },
00:10:12.286      {
00:10:12.286        "name": "BaseBdev3",
00:10:12.286        "uuid": "9034dcd4-77e2-4d22-9748-187a2bc0528c",
00:10:12.286        "is_configured": true,
00:10:12.286        "data_offset": 2048,
00:10:12.286        "data_size": 63488
00:10:12.286      }
00:10:12.286    ]
00:10:12.286  }'
00:10:12.286   11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:12.286   11:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:12.546    11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:12.547    11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:12.547    11:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:12.547    11:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:12.547    11:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:12.547   11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:10:12.547   11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:10:12.547   11:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:12.547   11:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:12.806  [2024-12-16 11:31:38.616362] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:12.806   11:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:12.806   11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:12.806   11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:12.806   11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:12.806   11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:12.806   11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:12.806   11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:12.806   11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:12.806   11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:12.806   11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:12.806   11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:12.806    11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:12.806    11:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:12.806    11:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:12.806    11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:12.806    11:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:12.806   11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:12.806    "name": "Existed_Raid",
00:10:12.806    "uuid": "6babad96-a726-4ea4-a783-d87fee0e851f",
00:10:12.806    "strip_size_kb": 64,
00:10:12.806    "state": "configuring",
00:10:12.806    "raid_level": "concat",
00:10:12.806    "superblock": true,
00:10:12.806    "num_base_bdevs": 3,
00:10:12.806    "num_base_bdevs_discovered": 2,
00:10:12.806    "num_base_bdevs_operational": 3,
00:10:12.806    "base_bdevs_list": [
00:10:12.806      {
00:10:12.806        "name": null,
00:10:12.806        "uuid": "065d91b4-5aaf-4fe8-a27b-83d32fac9818",
00:10:12.806        "is_configured": false,
00:10:12.806        "data_offset": 0,
00:10:12.806        "data_size": 63488
00:10:12.806      },
00:10:12.806      {
00:10:12.806        "name": "BaseBdev2",
00:10:12.806        "uuid": "5fff9346-d757-4034-ba49-81c06d283919",
00:10:12.806        "is_configured": true,
00:10:12.806        "data_offset": 2048,
00:10:12.806        "data_size": 63488
00:10:12.806      },
00:10:12.806      {
00:10:12.806        "name": "BaseBdev3",
00:10:12.806        "uuid": "9034dcd4-77e2-4d22-9748-187a2bc0528c",
00:10:12.806        "is_configured": true,
00:10:12.806        "data_offset": 2048,
00:10:12.806        "data_size": 63488
00:10:12.806      }
00:10:12.806    ]
00:10:12.806  }'
00:10:12.806   11:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:12.806   11:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:13.065    11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:13.065    11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:13.065    11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:13.065    11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:13.065    11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:13.324   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:10:13.324    11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:13.324    11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:10:13.324    11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:13.324    11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:13.324    11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:13.324   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 065d91b4-5aaf-4fe8-a27b-83d32fac9818
00:10:13.324   11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:13.324   11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:13.324  [2024-12-16 11:31:39.206342] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:10:13.324  [2024-12-16 11:31:39.206524] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:10:13.324  [2024-12-16 11:31:39.206553] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:10:13.324  [2024-12-16 11:31:39.206811] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:10:13.324  [2024-12-16 11:31:39.206947] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:10:13.324  NewBaseBdev
00:10:13.324  [2024-12-16 11:31:39.206957] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00
00:10:13.324  [2024-12-16 11:31:39.207062] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:13.324   11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:13.324   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:10:13.324   11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev
00:10:13.324   11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:13.324   11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:10:13.324   11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:13.324   11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:13.324   11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:13.324   11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:13.324   11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:13.324   11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:13.324   11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:10:13.324   11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:13.324   11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:13.324  [
00:10:13.324  {
00:10:13.324  "name": "NewBaseBdev",
00:10:13.324  "aliases": [
00:10:13.324  "065d91b4-5aaf-4fe8-a27b-83d32fac9818"
00:10:13.324  ],
00:10:13.324  "product_name": "Malloc disk",
00:10:13.324  "block_size": 512,
00:10:13.324  "num_blocks": 65536,
00:10:13.324  "uuid": "065d91b4-5aaf-4fe8-a27b-83d32fac9818",
00:10:13.324  "assigned_rate_limits": {
00:10:13.324  "rw_ios_per_sec": 0,
00:10:13.324  "rw_mbytes_per_sec": 0,
00:10:13.324  "r_mbytes_per_sec": 0,
00:10:13.324  "w_mbytes_per_sec": 0
00:10:13.324  },
00:10:13.324  "claimed": true,
00:10:13.324  "claim_type": "exclusive_write",
00:10:13.324  "zoned": false,
00:10:13.324  "supported_io_types": {
00:10:13.324  "read": true,
00:10:13.324  "write": true,
00:10:13.324  "unmap": true,
00:10:13.324  "flush": true,
00:10:13.324  "reset": true,
00:10:13.324  "nvme_admin": false,
00:10:13.324  "nvme_io": false,
00:10:13.324  "nvme_io_md": false,
00:10:13.324  "write_zeroes": true,
00:10:13.324  "zcopy": true,
00:10:13.324  "get_zone_info": false,
00:10:13.324  "zone_management": false,
00:10:13.324  "zone_append": false,
00:10:13.324  "compare": false,
00:10:13.324  "compare_and_write": false,
00:10:13.324  "abort": true,
00:10:13.324  "seek_hole": false,
00:10:13.324  "seek_data": false,
00:10:13.324  "copy": true,
00:10:13.324  "nvme_iov_md": false
00:10:13.324  },
00:10:13.324  "memory_domains": [
00:10:13.324  {
00:10:13.324  "dma_device_id": "system",
00:10:13.324  "dma_device_type": 1
00:10:13.324  },
00:10:13.324  {
00:10:13.324  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:13.324  "dma_device_type": 2
00:10:13.324  }
00:10:13.324  ],
00:10:13.324  "driver_specific": {}
00:10:13.324  }
00:10:13.324  ]
00:10:13.324   11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:13.324   11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
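The waitforbdev call traced above is the standard readiness check from autotest_common.sh: with no explicit timeout it defaults bdev_timeout to 2000 ms, flushes pending examine callbacks, then asks the target to report the bdev. A minimal standalone sketch of the same pattern (assuming rpc_cmd wraps scripts/rpc.py against the running bdev_svc target, as in this suite):

    # Flush examine callbacks, then poll for NewBaseBdev for up to ~2000 ms
    rpc_cmd bdev_wait_for_examine
    rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 || exit 1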
00:10:13.325   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3
00:10:13.325   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:13.325   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:13.325   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:13.325   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:13.325   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:13.325   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:13.325   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:13.325   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:13.325   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:13.325    11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:13.325    11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:13.325    11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:13.325    11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:13.325    11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:13.325   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:13.325    "name": "Existed_Raid",
00:10:13.325    "uuid": "6babad96-a726-4ea4-a783-d87fee0e851f",
00:10:13.325    "strip_size_kb": 64,
00:10:13.325    "state": "online",
00:10:13.325    "raid_level": "concat",
00:10:13.325    "superblock": true,
00:10:13.325    "num_base_bdevs": 3,
00:10:13.325    "num_base_bdevs_discovered": 3,
00:10:13.325    "num_base_bdevs_operational": 3,
00:10:13.325    "base_bdevs_list": [
00:10:13.325      {
00:10:13.325        "name": "NewBaseBdev",
00:10:13.325        "uuid": "065d91b4-5aaf-4fe8-a27b-83d32fac9818",
00:10:13.325        "is_configured": true,
00:10:13.325        "data_offset": 2048,
00:10:13.325        "data_size": 63488
00:10:13.325      },
00:10:13.325      {
00:10:13.325        "name": "BaseBdev2",
00:10:13.325        "uuid": "5fff9346-d757-4034-ba49-81c06d283919",
00:10:13.325        "is_configured": true,
00:10:13.325        "data_offset": 2048,
00:10:13.325        "data_size": 63488
00:10:13.325      },
00:10:13.325      {
00:10:13.325        "name": "BaseBdev3",
00:10:13.325        "uuid": "9034dcd4-77e2-4d22-9748-187a2bc0528c",
00:10:13.325        "is_configured": true,
00:10:13.325        "data_offset": 2048,
00:10:13.325        "data_size": 63488
00:10:13.325      }
00:10:13.325    ]
00:10:13.325  }'
00:10:13.325   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:13.325   11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
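verify_raid_bdev_state drives the check above: it pulls the raid bdev list, jq-selects the entry named Existed_Raid, and compares its fields against the expected values passed in (online, concat, strip size 64, 3 operational base bdevs). The individual comparisons are not traced because bdev_raid.sh@115 disables xtrace; a sketch of what the helper is assumed to verify against the JSON shown above:

    raid_bdev_info=$(rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    [[ $(jq -r '.state' <<< "$raid_bdev_info") == online ]]
    [[ $(jq -r '.raid_level' <<< "$raid_bdev_info") == concat ]]
    [[ $(jq -r '.strip_size_kb' <<< "$raid_bdev_info") == 64 ]]
    [[ $(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info") == 3 ]]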
00:10:13.895   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:10:13.895   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:10:13.895   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:13.895   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:13.895   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:10:13.895   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:13.895    11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:10:13.895    11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:13.895    11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:13.895    11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:13.895  [2024-12-16 11:31:39.689915] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:13.895    11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:13.895   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:13.895    "name": "Existed_Raid",
00:10:13.895    "aliases": [
00:10:13.895      "6babad96-a726-4ea4-a783-d87fee0e851f"
00:10:13.895    ],
00:10:13.895    "product_name": "Raid Volume",
00:10:13.895    "block_size": 512,
00:10:13.895    "num_blocks": 190464,
00:10:13.895    "uuid": "6babad96-a726-4ea4-a783-d87fee0e851f",
00:10:13.895    "assigned_rate_limits": {
00:10:13.895      "rw_ios_per_sec": 0,
00:10:13.895      "rw_mbytes_per_sec": 0,
00:10:13.895      "r_mbytes_per_sec": 0,
00:10:13.895      "w_mbytes_per_sec": 0
00:10:13.895    },
00:10:13.895    "claimed": false,
00:10:13.895    "zoned": false,
00:10:13.895    "supported_io_types": {
00:10:13.895      "read": true,
00:10:13.895      "write": true,
00:10:13.895      "unmap": true,
00:10:13.895      "flush": true,
00:10:13.895      "reset": true,
00:10:13.895      "nvme_admin": false,
00:10:13.895      "nvme_io": false,
00:10:13.895      "nvme_io_md": false,
00:10:13.895      "write_zeroes": true,
00:10:13.895      "zcopy": false,
00:10:13.895      "get_zone_info": false,
00:10:13.895      "zone_management": false,
00:10:13.895      "zone_append": false,
00:10:13.895      "compare": false,
00:10:13.895      "compare_and_write": false,
00:10:13.895      "abort": false,
00:10:13.895      "seek_hole": false,
00:10:13.895      "seek_data": false,
00:10:13.895      "copy": false,
00:10:13.895      "nvme_iov_md": false
00:10:13.895    },
00:10:13.895    "memory_domains": [
00:10:13.895      {
00:10:13.895        "dma_device_id": "system",
00:10:13.895        "dma_device_type": 1
00:10:13.895      },
00:10:13.895      {
00:10:13.895        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:13.895        "dma_device_type": 2
00:10:13.895      },
00:10:13.895      {
00:10:13.895        "dma_device_id": "system",
00:10:13.895        "dma_device_type": 1
00:10:13.895      },
00:10:13.895      {
00:10:13.895        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:13.895        "dma_device_type": 2
00:10:13.895      },
00:10:13.895      {
00:10:13.895        "dma_device_id": "system",
00:10:13.895        "dma_device_type": 1
00:10:13.895      },
00:10:13.895      {
00:10:13.895        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:13.895        "dma_device_type": 2
00:10:13.895      }
00:10:13.895    ],
00:10:13.895    "driver_specific": {
00:10:13.895      "raid": {
00:10:13.895        "uuid": "6babad96-a726-4ea4-a783-d87fee0e851f",
00:10:13.895        "strip_size_kb": 64,
00:10:13.895        "state": "online",
00:10:13.895        "raid_level": "concat",
00:10:13.895        "superblock": true,
00:10:13.895        "num_base_bdevs": 3,
00:10:13.895        "num_base_bdevs_discovered": 3,
00:10:13.895        "num_base_bdevs_operational": 3,
00:10:13.895        "base_bdevs_list": [
00:10:13.895          {
00:10:13.895            "name": "NewBaseBdev",
00:10:13.895            "uuid": "065d91b4-5aaf-4fe8-a27b-83d32fac9818",
00:10:13.895            "is_configured": true,
00:10:13.895            "data_offset": 2048,
00:10:13.895            "data_size": 63488
00:10:13.895          },
00:10:13.895          {
00:10:13.895            "name": "BaseBdev2",
00:10:13.895            "uuid": "5fff9346-d757-4034-ba49-81c06d283919",
00:10:13.895            "is_configured": true,
00:10:13.895            "data_offset": 2048,
00:10:13.895            "data_size": 63488
00:10:13.895          },
00:10:13.895          {
00:10:13.895            "name": "BaseBdev3",
00:10:13.895            "uuid": "9034dcd4-77e2-4d22-9748-187a2bc0528c",
00:10:13.895            "is_configured": true,
00:10:13.895            "data_offset": 2048,
00:10:13.895            "data_size": 63488
00:10:13.895          }
00:10:13.895        ]
00:10:13.895      }
00:10:13.895    }
00:10:13.895  }'
00:10:13.895    11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:13.895   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:10:13.895  BaseBdev2
00:10:13.895  BaseBdev3'
00:10:13.895    11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:13.895   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:10:13.895   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:13.895    11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:13.895    11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:10:13.895    11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:13.895    11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:13.895    11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:13.895   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:13.895   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:10:13.895   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:13.895    11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:10:13.895    11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:13.895    11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:13.895    11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:13.895    11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:13.895   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:13.895   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:10:13.895   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:13.895    11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:10:13.895    11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:13.895    11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:13.895    11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:13.895    11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:13.895   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:13.895   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
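verify_raid_bdev_properties, traced above, checks that the assembled raid volume presents the same geometry as every configured base bdev: it joins block_size, md_size, md_interleave and dif_type into a single string for the raid volume, then repeats the query per base bdev and compares. A compact sketch of that loop (same rpc_cmd/jq assumptions as before):

    cmp_raid_bdev=$(jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' <<< "$raid_bdev_info")
    for name in $base_bdev_names; do
        cmp_base_bdev=$(rpc_cmd bdev_get_bdevs -b "$name" |
            jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
        [[ $cmp_raid_bdev == "$cmp_base_bdev" ]]
    done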
00:10:13.895   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:13.895   11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:13.895   11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:14.155  [2024-12-16 11:31:39.965154] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:14.155  [2024-12-16 11:31:39.965249] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:14.155  [2024-12-16 11:31:39.965356] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:14.155  [2024-12-16 11:31:39.965414] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:14.155  [2024-12-16 11:31:39.965426] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline
00:10:14.155   11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:14.155   11:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77641
00:10:14.155   11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 77641 ']'
00:10:14.155   11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 77641
00:10:14.155    11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname
00:10:14.155   11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:14.155    11:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77641
00:10:14.155   11:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:14.155   11:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:10:14.155   11:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77641'
00:10:14.155  killing process with pid 77641
00:10:14.155   11:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 77641
00:10:14.155  [2024-12-16 11:31:40.013964] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:14.155   11:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 77641
00:10:14.155  [2024-12-16 11:31:40.046181] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:14.415   11:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:10:14.415  
00:10:14.415  real	0m9.506s
00:10:14.415  user	0m16.155s
00:10:14.415  sys	0m2.027s
00:10:14.415   11:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:14.415  ************************************
00:10:14.415  END TEST raid_state_function_test_sb
00:10:14.415  ************************************
00:10:14.415   11:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
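The teardown above follows the killprocess helper: confirm the pid still names the SPDK reactor process, send it SIGTERM, then wait for the reactors to drain (the raid_bdev_fini_start/raid_bdev_exit debug lines). A rough sketch of the same shutdown sequence ($raid_pid stands in for the bdev_svc pid, 77641 above; the busy-wait replaces the framework's own wait helper):

    ps --no-headers -o comm= "$raid_pid"                   # expected to print reactor_0
    kill "$raid_pid"
    while kill -0 "$raid_pid" 2>/dev/null; do sleep 0.1; done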
00:10:14.415   11:31:40 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3
00:10:14.415   11:31:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:10:14.415   11:31:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:14.415   11:31:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:14.415  ************************************
00:10:14.415  START TEST raid_superblock_test
00:10:14.415  ************************************
00:10:14.415   11:31:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3
00:10:14.415   11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat
00:10:14.415   11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3
00:10:14.415   11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:10:14.415   11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:10:14.415   11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:10:14.415   11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:10:14.415   11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:10:14.415   11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:10:14.415   11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:10:14.415   11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:10:14.415   11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:10:14.415   11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:10:14.415   11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:10:14.415   11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']'
00:10:14.415   11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:10:14.415   11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:10:14.415   11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=78256
00:10:14.415   11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:10:14.415   11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 78256
00:10:14.415   11:31:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 78256 ']'
00:10:14.415   11:31:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:14.415   11:31:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:14.415   11:31:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:14.415  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:14.415   11:31:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:10:14.415   11:31:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:14.415  [2024-12-16 11:31:40.463933] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:10:14.415  [2024-12-16 11:31:40.464158] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78256 ]
00:10:14.675  [2024-12-16 11:31:40.608482] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:14.675  [2024-12-16 11:31:40.658816] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:10:14.675  [2024-12-16 11:31:40.701423] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:14.675  [2024-12-16 11:31:40.701533] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
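raid_superblock_test starts its own bare bdev_svc application with -L bdev_raid so the raid module emits the DEBUG lines seen throughout, then blocks in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers. A hypothetical sketch of that startup (the $SPDK_DIR variable and explicit backgrounding are assumptions; the suite records the resulting pid as raid_pid=78256 above):

    "$SPDK_DIR"/test/app/bdev_svc/bdev_svc -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid"        # polls /var/tmp/spdk.sock until the app answers RPCs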
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:15.618  malloc1
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:15.618  [2024-12-16 11:31:41.371960] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:10:15.618  [2024-12-16 11:31:41.372113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:15.618  [2024-12-16 11:31:41.372160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:10:15.618  [2024-12-16 11:31:41.372211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:15.618  [2024-12-16 11:31:41.374327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:15.618  [2024-12-16 11:31:41.374405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:10:15.618  pt1
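Each base device in this test is a 32 MiB malloc bdev with 512-byte blocks, wrapped in a passthru bdev carrying a fixed UUID so the raid superblock records stable identifiers; the surrounding loop repeats the same two RPCs for pt2 and pt3 below. As traced above:

    rpc_cmd bdev_malloc_create 32 512 -b malloc1
    rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001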
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:15.618  malloc2
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:15.618  [2024-12-16 11:31:41.411978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:10:15.618  [2024-12-16 11:31:41.412044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:15.618  [2024-12-16 11:31:41.412063] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:10:15.618  [2024-12-16 11:31:41.412074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:15.618  [2024-12-16 11:31:41.414242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:15.618  [2024-12-16 11:31:41.414284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:10:15.618  pt2
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:10:15.618   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:10:15.619   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:10:15.619   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:10:15.619   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:10:15.619   11:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:15.619   11:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:15.619  malloc3
00:10:15.619   11:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:15.619   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:10:15.619   11:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:15.619   11:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:15.619  [2024-12-16 11:31:41.440616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:10:15.619  [2024-12-16 11:31:41.440675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:15.619  [2024-12-16 11:31:41.440693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:10:15.619  [2024-12-16 11:31:41.440703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:15.619  [2024-12-16 11:31:41.442746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:15.619  [2024-12-16 11:31:41.442862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:10:15.619  pt3
00:10:15.619   11:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:15.619   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:10:15.619   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:15.619   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:10:15.619   11:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:15.619   11:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:15.619  [2024-12-16 11:31:41.452614] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:10:15.619  [2024-12-16 11:31:41.454440] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:15.619  [2024-12-16 11:31:41.454566] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:10:15.619  [2024-12-16 11:31:41.454712] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:10:15.619  [2024-12-16 11:31:41.454724] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:10:15.619  [2024-12-16 11:31:41.454968] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:10:15.619  [2024-12-16 11:31:41.455085] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:10:15.619  [2024-12-16 11:31:41.455099] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:10:15.619  [2024-12-16 11:31:41.455222] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:15.619   11:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
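The raid volume is then assembled across the three passthru bdevs with a 64 KiB strip size and an on-disk superblock (-s); reserving room for that superblock is consistent with each 65536-block base device reporting data_offset 2048 and data_size 63488 in the JSON that follows. The creating RPC, as traced above (trace quoting removed):

    rpc_cmd bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s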
00:10:15.619   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:10:15.619   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:15.619   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:15.619   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:15.619   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:15.619   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:15.619   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:15.619   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:15.619   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:15.619   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:15.619    11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:15.619    11:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:15.619    11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:15.619    11:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:15.619    11:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:15.619   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:15.619    "name": "raid_bdev1",
00:10:15.619    "uuid": "1cd215a0-938e-421e-a3b6-ea32fffc8903",
00:10:15.619    "strip_size_kb": 64,
00:10:15.619    "state": "online",
00:10:15.619    "raid_level": "concat",
00:10:15.619    "superblock": true,
00:10:15.619    "num_base_bdevs": 3,
00:10:15.619    "num_base_bdevs_discovered": 3,
00:10:15.619    "num_base_bdevs_operational": 3,
00:10:15.619    "base_bdevs_list": [
00:10:15.619      {
00:10:15.619        "name": "pt1",
00:10:15.619        "uuid": "00000000-0000-0000-0000-000000000001",
00:10:15.619        "is_configured": true,
00:10:15.619        "data_offset": 2048,
00:10:15.619        "data_size": 63488
00:10:15.619      },
00:10:15.619      {
00:10:15.619        "name": "pt2",
00:10:15.619        "uuid": "00000000-0000-0000-0000-000000000002",
00:10:15.619        "is_configured": true,
00:10:15.619        "data_offset": 2048,
00:10:15.619        "data_size": 63488
00:10:15.619      },
00:10:15.619      {
00:10:15.619        "name": "pt3",
00:10:15.619        "uuid": "00000000-0000-0000-0000-000000000003",
00:10:15.619        "is_configured": true,
00:10:15.619        "data_offset": 2048,
00:10:15.619        "data_size": 63488
00:10:15.619      }
00:10:15.619    ]
00:10:15.619  }'
00:10:15.619   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:15.619   11:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:15.879   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:10:15.879   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:10:15.879   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:15.879   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:15.879   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:10:15.879   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:15.879    11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:15.879    11:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:15.879    11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:15.879    11:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:15.879  [2024-12-16 11:31:41.924141] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:16.139    11:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:16.139   11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:16.139    "name": "raid_bdev1",
00:10:16.139    "aliases": [
00:10:16.139      "1cd215a0-938e-421e-a3b6-ea32fffc8903"
00:10:16.139    ],
00:10:16.139    "product_name": "Raid Volume",
00:10:16.139    "block_size": 512,
00:10:16.139    "num_blocks": 190464,
00:10:16.139    "uuid": "1cd215a0-938e-421e-a3b6-ea32fffc8903",
00:10:16.139    "assigned_rate_limits": {
00:10:16.139      "rw_ios_per_sec": 0,
00:10:16.139      "rw_mbytes_per_sec": 0,
00:10:16.139      "r_mbytes_per_sec": 0,
00:10:16.139      "w_mbytes_per_sec": 0
00:10:16.139    },
00:10:16.139    "claimed": false,
00:10:16.139    "zoned": false,
00:10:16.139    "supported_io_types": {
00:10:16.139      "read": true,
00:10:16.139      "write": true,
00:10:16.139      "unmap": true,
00:10:16.139      "flush": true,
00:10:16.139      "reset": true,
00:10:16.139      "nvme_admin": false,
00:10:16.139      "nvme_io": false,
00:10:16.139      "nvme_io_md": false,
00:10:16.139      "write_zeroes": true,
00:10:16.139      "zcopy": false,
00:10:16.139      "get_zone_info": false,
00:10:16.139      "zone_management": false,
00:10:16.139      "zone_append": false,
00:10:16.139      "compare": false,
00:10:16.139      "compare_and_write": false,
00:10:16.139      "abort": false,
00:10:16.139      "seek_hole": false,
00:10:16.139      "seek_data": false,
00:10:16.139      "copy": false,
00:10:16.139      "nvme_iov_md": false
00:10:16.139    },
00:10:16.139    "memory_domains": [
00:10:16.139      {
00:10:16.139        "dma_device_id": "system",
00:10:16.139        "dma_device_type": 1
00:10:16.139      },
00:10:16.139      {
00:10:16.139        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:16.139        "dma_device_type": 2
00:10:16.139      },
00:10:16.139      {
00:10:16.139        "dma_device_id": "system",
00:10:16.139        "dma_device_type": 1
00:10:16.139      },
00:10:16.139      {
00:10:16.139        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:16.139        "dma_device_type": 2
00:10:16.139      },
00:10:16.139      {
00:10:16.139        "dma_device_id": "system",
00:10:16.139        "dma_device_type": 1
00:10:16.139      },
00:10:16.139      {
00:10:16.139        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:16.139        "dma_device_type": 2
00:10:16.139      }
00:10:16.139    ],
00:10:16.139    "driver_specific": {
00:10:16.139      "raid": {
00:10:16.139        "uuid": "1cd215a0-938e-421e-a3b6-ea32fffc8903",
00:10:16.139        "strip_size_kb": 64,
00:10:16.139        "state": "online",
00:10:16.139        "raid_level": "concat",
00:10:16.139        "superblock": true,
00:10:16.139        "num_base_bdevs": 3,
00:10:16.139        "num_base_bdevs_discovered": 3,
00:10:16.139        "num_base_bdevs_operational": 3,
00:10:16.139        "base_bdevs_list": [
00:10:16.139          {
00:10:16.139            "name": "pt1",
00:10:16.139            "uuid": "00000000-0000-0000-0000-000000000001",
00:10:16.139            "is_configured": true,
00:10:16.139            "data_offset": 2048,
00:10:16.139            "data_size": 63488
00:10:16.139          },
00:10:16.139          {
00:10:16.139            "name": "pt2",
00:10:16.139            "uuid": "00000000-0000-0000-0000-000000000002",
00:10:16.139            "is_configured": true,
00:10:16.139            "data_offset": 2048,
00:10:16.139            "data_size": 63488
00:10:16.139          },
00:10:16.139          {
00:10:16.139            "name": "pt3",
00:10:16.139            "uuid": "00000000-0000-0000-0000-000000000003",
00:10:16.139            "is_configured": true,
00:10:16.139            "data_offset": 2048,
00:10:16.139            "data_size": 63488
00:10:16.139          }
00:10:16.139        ]
00:10:16.139      }
00:10:16.139    }
00:10:16.139  }'
00:10:16.139    11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:16.139   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:10:16.139  pt2
00:10:16.139  pt3'
00:10:16.139    11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:16.139   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:10:16.139   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:16.139    11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:16.139    11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:10:16.139    11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:16.139    11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.139    11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:16.139   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:16.139   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:10:16.139   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:16.139    11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:16.139    11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:10:16.139    11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:16.139    11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.139    11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:16.139   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:16.139   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:10:16.139   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:16.139    11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:10:16.139    11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:16.139    11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:16.139    11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.139    11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:16.139   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:16.139   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:10:16.139    11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:16.139    11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:16.139    11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:10:16.139    11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.139  [2024-12-16 11:31:42.187698] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:16.400    11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:16.400   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1cd215a0-938e-421e-a3b6-ea32fffc8903
00:10:16.400   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1cd215a0-938e-421e-a3b6-ea32fffc8903 ']'
00:10:16.400   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:10:16.400   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:16.400   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.400  [2024-12-16 11:31:42.235271] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:16.400  [2024-12-16 11:31:42.235344] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:16.400  [2024-12-16 11:31:42.235465] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:16.400  [2024-12-16 11:31:42.235538] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:16.400  [2024-12-16 11:31:42.235572] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:10:16.400   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:16.400    11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:16.400    11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:10:16.400    11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:16.400    11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.400    11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:16.400   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:10:16.400   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:10:16.400   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:16.401    11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:10:16.401    11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:16.401    11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.401    11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:10:16.401    11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:16.401    11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.401  [2024-12-16 11:31:42.363075] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:10:16.401  [2024-12-16 11:31:42.365230] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:10:16.401  [2024-12-16 11:31:42.365326] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:10:16.401  [2024-12-16 11:31:42.365399] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:10:16.401  [2024-12-16 11:31:42.365484] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:10:16.401  [2024-12-16 11:31:42.365548] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:10:16.401  [2024-12-16 11:31:42.365621] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:16.401  [2024-12-16 11:31:42.365671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring
00:10:16.401  request:
00:10:16.401  {
00:10:16.401  "name": "raid_bdev1",
00:10:16.401  "raid_level": "concat",
00:10:16.401  "base_bdevs": [
00:10:16.401  "malloc1",
00:10:16.401  "malloc2",
00:10:16.401  "malloc3"
00:10:16.401  ],
00:10:16.401  "strip_size_kb": 64,
00:10:16.401  "superblock": false,
00:10:16.401  "method": "bdev_raid_create",
00:10:16.401  "req_id": 1
00:10:16.401  }
00:10:16.401  Got JSON-RPC error response
00:10:16.401  response:
00:10:16.401  {
00:10:16.401  "code": -17,
00:10:16.401  "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:10:16.401  }
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
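This is the negative path: the malloc bdevs still carry the superblocks written for raid_bdev1, so creating a new raid directly over them is rejected with -17 (File exists), and the NOT wrapper turns that expected failure into success (es=1 above, and not a signal since es stays below 128). A minimal sketch of the NOT idiom, simplified from the real helper in autotest_common.sh:

    NOT() { if "$@"; then return 1; else return 0; fi; }
    NOT rpc_cmd bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1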
00:10:16.401    11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:16.401    11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:16.401    11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:10:16.401    11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.401    11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.401  [2024-12-16 11:31:42.430936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:10:16.401  [2024-12-16 11:31:42.431049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:16.401  [2024-12-16 11:31:42.431092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:10:16.401  [2024-12-16 11:31:42.431125] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:16.401  [2024-12-16 11:31:42.433392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:16.401  [2024-12-16 11:31:42.433470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:10:16.401  [2024-12-16 11:31:42.433577] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:10:16.401  [2024-12-16 11:31:42.433641] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:10:16.401  pt1
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:16.401   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:16.401    11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:16.401    11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:16.401    11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.401    11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:16.401    11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:16.661   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:16.661    "name": "raid_bdev1",
00:10:16.661    "uuid": "1cd215a0-938e-421e-a3b6-ea32fffc8903",
00:10:16.661    "strip_size_kb": 64,
00:10:16.661    "state": "configuring",
00:10:16.661    "raid_level": "concat",
00:10:16.661    "superblock": true,
00:10:16.662    "num_base_bdevs": 3,
00:10:16.662    "num_base_bdevs_discovered": 1,
00:10:16.662    "num_base_bdevs_operational": 3,
00:10:16.662    "base_bdevs_list": [
00:10:16.662      {
00:10:16.662        "name": "pt1",
00:10:16.662        "uuid": "00000000-0000-0000-0000-000000000001",
00:10:16.662        "is_configured": true,
00:10:16.662        "data_offset": 2048,
00:10:16.662        "data_size": 63488
00:10:16.662      },
00:10:16.662      {
00:10:16.662        "name": null,
00:10:16.662        "uuid": "00000000-0000-0000-0000-000000000002",
00:10:16.662        "is_configured": false,
00:10:16.662        "data_offset": 2048,
00:10:16.662        "data_size": 63488
00:10:16.662      },
00:10:16.662      {
00:10:16.662        "name": null,
00:10:16.662        "uuid": "00000000-0000-0000-0000-000000000003",
00:10:16.662        "is_configured": false,
00:10:16.662        "data_offset": 2048,
00:10:16.662        "data_size": 63488
00:10:16.662      }
00:10:16.662    ]
00:10:16.662  }'
00:10:16.662   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:16.662   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
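The state check above reduces to a single RPC call plus a jq filter. A minimal sketch of the same query, assuming a running SPDK target on the default /var/tmp/spdk.sock and scripts/rpc.py from the SPDK repo (paths are assumptions; the RPC name and jq filter are taken from this trace):

    # list all raid bdevs and keep only raid_bdev1 (same filter the test uses)
    ./scripts/rpc.py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
    # with only pt1 registered, "state" stays "configuring" and num_base_bdevs_discovered is 1 of 3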
00:10:16.921   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:10:16.921   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:16.921   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:16.921   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.921  [2024-12-16 11:31:42.838285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:10:16.921  [2024-12-16 11:31:42.838453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:16.921  [2024-12-16 11:31:42.838493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:10:16.921  [2024-12-16 11:31:42.838528] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:16.921  [2024-12-16 11:31:42.838958] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:16.921  [2024-12-16 11:31:42.839025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:10:16.921  [2024-12-16 11:31:42.839130] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:10:16.921  [2024-12-16 11:31:42.839182] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:16.921  pt2
00:10:16.921   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:16.921   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:10:16.921   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:16.921   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.921  [2024-12-16 11:31:42.850273] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:10:16.922   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:16.922   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:10:16.922   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:16.922   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:16.922   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:16.922   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:16.922   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:16.922   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:16.922   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:16.922   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:16.922   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:16.922    11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:16.922    11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:16.922    11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:16.922    11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.922    11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:16.922   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:16.922    "name": "raid_bdev1",
00:10:16.922    "uuid": "1cd215a0-938e-421e-a3b6-ea32fffc8903",
00:10:16.922    "strip_size_kb": 64,
00:10:16.922    "state": "configuring",
00:10:16.922    "raid_level": "concat",
00:10:16.922    "superblock": true,
00:10:16.922    "num_base_bdevs": 3,
00:10:16.922    "num_base_bdevs_discovered": 1,
00:10:16.922    "num_base_bdevs_operational": 3,
00:10:16.922    "base_bdevs_list": [
00:10:16.922      {
00:10:16.922        "name": "pt1",
00:10:16.922        "uuid": "00000000-0000-0000-0000-000000000001",
00:10:16.922        "is_configured": true,
00:10:16.922        "data_offset": 2048,
00:10:16.922        "data_size": 63488
00:10:16.922      },
00:10:16.922      {
00:10:16.922        "name": null,
00:10:16.922        "uuid": "00000000-0000-0000-0000-000000000002",
00:10:16.922        "is_configured": false,
00:10:16.922        "data_offset": 0,
00:10:16.922        "data_size": 63488
00:10:16.922      },
00:10:16.922      {
00:10:16.922        "name": null,
00:10:16.922        "uuid": "00000000-0000-0000-0000-000000000003",
00:10:16.922        "is_configured": false,
00:10:16.922        "data_offset": 2048,
00:10:16.922        "data_size": 63488
00:10:16.922      }
00:10:16.922    ]
00:10:16.922  }'
00:10:16.922   11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:16.922   11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
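The pt2 delete above and the re-add that follows use the same passthru RPCs seen in the trace; a hedged sketch with the UUID this test assigns to pt2:

    # dropping pt2 leaves the raid bdev in "configuring" with 1 of 3 base bdevs discovered
    ./scripts/rpc.py bdev_passthru_delete pt2
    # re-registering pt2 on malloc2 with the same UUID lets the raid superblock claim it again
    ./scripts/rpc.py bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002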
00:10:17.492   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:10:17.492   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:10:17.492   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:17.492   11:31:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.492   11:31:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.492  [2024-12-16 11:31:43.289538] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:10:17.492  [2024-12-16 11:31:43.289637] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:17.492  [2024-12-16 11:31:43.289660] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:10:17.492  [2024-12-16 11:31:43.289670] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:17.492  [2024-12-16 11:31:43.290063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:17.492  [2024-12-16 11:31:43.290081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:10:17.492  [2024-12-16 11:31:43.290157] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:10:17.492  [2024-12-16 11:31:43.290179] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:17.492  pt2
00:10:17.492   11:31:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:17.492   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:10:17.492   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:10:17.492   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:10:17.492   11:31:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.492   11:31:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.492  [2024-12-16 11:31:43.301488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:10:17.492  [2024-12-16 11:31:43.301642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:17.492  [2024-12-16 11:31:43.301669] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:10:17.492  [2024-12-16 11:31:43.301678] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:17.492  [2024-12-16 11:31:43.302068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:17.492  [2024-12-16 11:31:43.302087] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:10:17.492  [2024-12-16 11:31:43.302155] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:10:17.492  [2024-12-16 11:31:43.302174] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:10:17.492  [2024-12-16 11:31:43.302288] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:10:17.492  [2024-12-16 11:31:43.302298] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:10:17.492  [2024-12-16 11:31:43.302541] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:10:17.492  [2024-12-16 11:31:43.302669] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:10:17.492  [2024-12-16 11:31:43.302682] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:10:17.492  [2024-12-16 11:31:43.302785] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:17.492  pt3
00:10:17.492   11:31:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:17.492   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:10:17.492   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:10:17.492   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:10:17.492   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:17.492   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:17.492   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:17.493   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:17.493   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:17.493   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:17.493   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:17.493   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:17.493   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:17.493    11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:17.493    11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:17.493    11:31:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.493    11:31:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.493    11:31:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:17.493   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:17.493    "name": "raid_bdev1",
00:10:17.493    "uuid": "1cd215a0-938e-421e-a3b6-ea32fffc8903",
00:10:17.493    "strip_size_kb": 64,
00:10:17.493    "state": "online",
00:10:17.493    "raid_level": "concat",
00:10:17.493    "superblock": true,
00:10:17.493    "num_base_bdevs": 3,
00:10:17.493    "num_base_bdevs_discovered": 3,
00:10:17.493    "num_base_bdevs_operational": 3,
00:10:17.493    "base_bdevs_list": [
00:10:17.493      {
00:10:17.493        "name": "pt1",
00:10:17.493        "uuid": "00000000-0000-0000-0000-000000000001",
00:10:17.493        "is_configured": true,
00:10:17.493        "data_offset": 2048,
00:10:17.493        "data_size": 63488
00:10:17.493      },
00:10:17.493      {
00:10:17.493        "name": "pt2",
00:10:17.493        "uuid": "00000000-0000-0000-0000-000000000002",
00:10:17.493        "is_configured": true,
00:10:17.493        "data_offset": 2048,
00:10:17.493        "data_size": 63488
00:10:17.493      },
00:10:17.493      {
00:10:17.493        "name": "pt3",
00:10:17.493        "uuid": "00000000-0000-0000-0000-000000000003",
00:10:17.493        "is_configured": true,
00:10:17.493        "data_offset": 2048,
00:10:17.493        "data_size": 63488
00:10:17.493      }
00:10:17.493    ]
00:10:17.493  }'
00:10:17.493   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:17.493   11:31:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.752   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:10:17.752   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:10:17.752   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:17.752   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:17.752   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:10:17.752   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:17.752    11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:17.752    11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:17.752    11:31:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.752    11:31:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.752  [2024-12-16 11:31:43.749000] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:17.752    11:31:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:17.752   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:17.752    "name": "raid_bdev1",
00:10:17.752    "aliases": [
00:10:17.752      "1cd215a0-938e-421e-a3b6-ea32fffc8903"
00:10:17.752    ],
00:10:17.752    "product_name": "Raid Volume",
00:10:17.752    "block_size": 512,
00:10:17.752    "num_blocks": 190464,
00:10:17.752    "uuid": "1cd215a0-938e-421e-a3b6-ea32fffc8903",
00:10:17.752    "assigned_rate_limits": {
00:10:17.752      "rw_ios_per_sec": 0,
00:10:17.752      "rw_mbytes_per_sec": 0,
00:10:17.752      "r_mbytes_per_sec": 0,
00:10:17.752      "w_mbytes_per_sec": 0
00:10:17.752    },
00:10:17.752    "claimed": false,
00:10:17.752    "zoned": false,
00:10:17.752    "supported_io_types": {
00:10:17.753      "read": true,
00:10:17.753      "write": true,
00:10:17.753      "unmap": true,
00:10:17.753      "flush": true,
00:10:17.753      "reset": true,
00:10:17.753      "nvme_admin": false,
00:10:17.753      "nvme_io": false,
00:10:17.753      "nvme_io_md": false,
00:10:17.753      "write_zeroes": true,
00:10:17.753      "zcopy": false,
00:10:17.753      "get_zone_info": false,
00:10:17.753      "zone_management": false,
00:10:17.753      "zone_append": false,
00:10:17.753      "compare": false,
00:10:17.753      "compare_and_write": false,
00:10:17.753      "abort": false,
00:10:17.753      "seek_hole": false,
00:10:17.753      "seek_data": false,
00:10:17.753      "copy": false,
00:10:17.753      "nvme_iov_md": false
00:10:17.753    },
00:10:17.753    "memory_domains": [
00:10:17.753      {
00:10:17.753        "dma_device_id": "system",
00:10:17.753        "dma_device_type": 1
00:10:17.753      },
00:10:17.753      {
00:10:17.753        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:17.753        "dma_device_type": 2
00:10:17.753      },
00:10:17.753      {
00:10:17.753        "dma_device_id": "system",
00:10:17.753        "dma_device_type": 1
00:10:17.753      },
00:10:17.753      {
00:10:17.753        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:17.753        "dma_device_type": 2
00:10:17.753      },
00:10:17.753      {
00:10:17.753        "dma_device_id": "system",
00:10:17.753        "dma_device_type": 1
00:10:17.753      },
00:10:17.753      {
00:10:17.753        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:17.753        "dma_device_type": 2
00:10:17.753      }
00:10:17.753    ],
00:10:17.753    "driver_specific": {
00:10:17.753      "raid": {
00:10:17.753        "uuid": "1cd215a0-938e-421e-a3b6-ea32fffc8903",
00:10:17.753        "strip_size_kb": 64,
00:10:17.753        "state": "online",
00:10:17.753        "raid_level": "concat",
00:10:17.753        "superblock": true,
00:10:17.753        "num_base_bdevs": 3,
00:10:17.753        "num_base_bdevs_discovered": 3,
00:10:17.753        "num_base_bdevs_operational": 3,
00:10:17.753        "base_bdevs_list": [
00:10:17.753          {
00:10:17.753            "name": "pt1",
00:10:17.753            "uuid": "00000000-0000-0000-0000-000000000001",
00:10:17.753            "is_configured": true,
00:10:17.753            "data_offset": 2048,
00:10:17.753            "data_size": 63488
00:10:17.753          },
00:10:17.753          {
00:10:17.753            "name": "pt2",
00:10:17.753            "uuid": "00000000-0000-0000-0000-000000000002",
00:10:17.753            "is_configured": true,
00:10:17.753            "data_offset": 2048,
00:10:17.753            "data_size": 63488
00:10:17.753          },
00:10:17.753          {
00:10:17.753            "name": "pt3",
00:10:17.753            "uuid": "00000000-0000-0000-0000-000000000003",
00:10:17.753            "is_configured": true,
00:10:17.753            "data_offset": 2048,
00:10:17.753            "data_size": 63488
00:10:17.753          }
00:10:17.753        ]
00:10:17.753      }
00:10:17.753    }
00:10:17.753  }'
00:10:17.753    11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:18.012   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:10:18.012  pt2
00:10:18.012  pt3'
00:10:18.012    11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:18.012   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:10:18.012   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:18.012    11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:10:18.012    11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:18.012    11:31:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:18.012    11:31:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.012    11:31:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:18.012   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:18.012   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:10:18.012   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:18.012    11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:18.012    11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:10:18.012    11:31:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:18.012    11:31:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.012    11:31:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:18.012   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:18.012   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:10:18.012   11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:18.012    11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:10:18.012    11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:18.012    11:31:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:18.012    11:31:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.012    11:31:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:18.012   11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:18.012   11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
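The '512   ' strings above are the four format fields joined with single spaces; in this run only block_size is populated for the malloc-backed passthru bdevs, so md_size, md_interleave and dif_type render as empty fields. A sketch of the same comparison, assuming scripts/rpc.py is reachable (the jq filter is verbatim from the trace):

    # "<block_size> <md_size> <md_interleave> <dif_type>" for the raid bdev
    ./scripts/rpc.py bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
    # repeat per base bdev (pt1, pt2, pt3) and require an exact string match against the raid bdev's string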
00:10:18.012    11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:18.012    11:31:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:18.012    11:31:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.012    11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:10:18.012  [2024-12-16 11:31:44.036484] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:18.012    11:31:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:18.012   11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1cd215a0-938e-421e-a3b6-ea32fffc8903 '!=' 1cd215a0-938e-421e-a3b6-ea32fffc8903 ']'
00:10:18.012   11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat
00:10:18.272   11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:10:18.272   11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:10:18.272   11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 78256
00:10:18.272   11:31:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 78256 ']'
00:10:18.272   11:31:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 78256
00:10:18.272    11:31:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname
00:10:18.272   11:31:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:18.272    11:31:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78256
00:10:18.272  killing process with pid 78256
00:10:18.272   11:31:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:18.272   11:31:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:10:18.272   11:31:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78256'
00:10:18.272   11:31:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 78256
00:10:18.272  [2024-12-16 11:31:44.114370] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:18.272  [2024-12-16 11:31:44.114471] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:18.272  [2024-12-16 11:31:44.114555] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:18.272   11:31:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 78256
00:10:18.272  [2024-12-16 11:31:44.114565] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:10:18.272  [2024-12-16 11:31:44.149182] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:18.532   11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:10:18.532  
00:10:18.532  real	0m4.019s
00:10:18.532  user	0m6.278s
00:10:18.532  sys	0m0.932s
00:10:18.532   11:31:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:18.532   11:31:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.532  ************************************
00:10:18.532  END TEST raid_superblock_test
00:10:18.532  ************************************
00:10:18.532   11:31:44 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read
00:10:18.532   11:31:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:10:18.532   11:31:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:18.532   11:31:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:18.532  ************************************
00:10:18.532  START TEST raid_read_error_test
00:10:18.532  ************************************
00:10:18.532   11:31:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read
00:10:18.532   11:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat
00:10:18.532   11:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3
00:10:18.532   11:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:10:18.532    11:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:10:18.532    11:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:18.532    11:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:10:18.532    11:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:10:18.532    11:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:18.532    11:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:10:18.532    11:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:10:18.532    11:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:18.532    11:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:10:18.532    11:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:10:18.532    11:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:18.532   11:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:10:18.532   11:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:10:18.532   11:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:10:18.532   11:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:10:18.532   11:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:10:18.532   11:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:10:18.532   11:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:10:18.532   11:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']'
00:10:18.532   11:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:10:18.532   11:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:10:18.532    11:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:10:18.532   11:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZiEnrwP8yI
00:10:18.532   11:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:10:18.532   11:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78498
00:10:18.532   11:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78498
00:10:18.532   11:31:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 78498 ']'
00:10:18.532   11:31:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:18.532   11:31:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:18.532   11:31:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:18.532  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:18.532   11:31:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:10:18.532   11:31:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.532  [2024-12-16 11:31:44.544624] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:10:18.532  [2024-12-16 11:31:44.544740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78498 ]
00:10:18.791  [2024-12-16 11:31:44.705347] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:18.791  [2024-12-16 11:31:44.752560] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:10:18.791  [2024-12-16 11:31:44.794510] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:18.791  [2024-12-16 11:31:44.794555] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
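The error tests drive a long-lived bdevperf process over RPC rather than a one-shot app. A condensed sketch of the flow the trace below walks through step by step (relative paths and backgrounding are assumptions; flags, bdev names and RPC names are taken from this run):

    # -z makes bdevperf wait for RPC configuration before starting the workload
    ./build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &
    # per base bdev: malloc -> error bdev (named EE_<base>) -> passthru, so failures can be injected later
    ./scripts/rpc.py bdev_malloc_create 32 512 -b BaseBdev1_malloc
    ./scripts/rpc.py bdev_error_create BaseBdev1_malloc
    ./scripts/rpc.py bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
    # assemble the concat array with a superblock, inject a read error, then run the workload
    ./scripts/rpc.py bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
    ./scripts/rpc.py bdev_error_inject_error EE_BaseBdev1_malloc read failure
    ./examples/bdev/bdevperf/bdevperf.py perform_tests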
00:10:19.729   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:10:19.729   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0
00:10:19.729   11:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:10:19.729   11:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:10:19.729   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:19.729   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.729  BaseBdev1_malloc
00:10:19.729   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:19.729   11:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:10:19.729   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:19.729   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.729  true
00:10:19.729   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:19.729   11:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:10:19.729   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:19.729   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.729  [2024-12-16 11:31:45.492971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:10:19.729  [2024-12-16 11:31:45.493030] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:19.729  [2024-12-16 11:31:45.493060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:10:19.729  [2024-12-16 11:31:45.493070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:19.729  [2024-12-16 11:31:45.495486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:19.729  [2024-12-16 11:31:45.495528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:10:19.729  BaseBdev1
00:10:19.729   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:19.729   11:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:10:19.729   11:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:10:19.729   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:19.729   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.729  BaseBdev2_malloc
00:10:19.729   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:19.729   11:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:10:19.729   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:19.729   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.729  true
00:10:19.729   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:19.729   11:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:10:19.729   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:19.729   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.729  [2024-12-16 11:31:45.541099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:10:19.729  [2024-12-16 11:31:45.541197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:19.729  [2024-12-16 11:31:45.541220] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:10:19.729  [2024-12-16 11:31:45.541229] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:19.729  [2024-12-16 11:31:45.543315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:19.729  [2024-12-16 11:31:45.543355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:10:19.729  BaseBdev2
00:10:19.729   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:19.730   11:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:10:19.730   11:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:10:19.730   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:19.730   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.730  BaseBdev3_malloc
00:10:19.730   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:19.730   11:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:10:19.730   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:19.730   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.730  true
00:10:19.730   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:19.730   11:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:10:19.730   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:19.730   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.730  [2024-12-16 11:31:45.581819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:10:19.730  [2024-12-16 11:31:45.581873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:19.730  [2024-12-16 11:31:45.581893] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:10:19.730  [2024-12-16 11:31:45.581902] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:19.730  [2024-12-16 11:31:45.584042] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:19.730  [2024-12-16 11:31:45.584152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:10:19.730  BaseBdev3
00:10:19.730   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:19.730   11:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s
00:10:19.730   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:19.730   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.730  [2024-12-16 11:31:45.593888] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:19.730  [2024-12-16 11:31:45.596054] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:19.730  [2024-12-16 11:31:45.596150] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:19.730  [2024-12-16 11:31:45.596363] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:10:19.730  [2024-12-16 11:31:45.596395] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:10:19.730  [2024-12-16 11:31:45.596697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:10:19.730  [2024-12-16 11:31:45.596864] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:10:19.730  [2024-12-16 11:31:45.596876] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00
00:10:19.730  [2024-12-16 11:31:45.597041] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:19.730   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:19.730   11:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:10:19.730   11:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:19.730   11:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:19.730   11:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:19.730   11:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:19.730   11:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:19.730   11:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:19.730   11:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:19.730   11:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:19.730   11:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:19.730    11:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:19.730    11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:19.730    11:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:19.730    11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.730    11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:19.730   11:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:19.730    "name": "raid_bdev1",
00:10:19.730    "uuid": "fc023749-ac80-4a5c-958c-d8e69069c011",
00:10:19.730    "strip_size_kb": 64,
00:10:19.730    "state": "online",
00:10:19.730    "raid_level": "concat",
00:10:19.730    "superblock": true,
00:10:19.730    "num_base_bdevs": 3,
00:10:19.730    "num_base_bdevs_discovered": 3,
00:10:19.730    "num_base_bdevs_operational": 3,
00:10:19.730    "base_bdevs_list": [
00:10:19.730      {
00:10:19.730        "name": "BaseBdev1",
00:10:19.730        "uuid": "e6700d73-6e3f-5242-ba96-e43653107478",
00:10:19.730        "is_configured": true,
00:10:19.730        "data_offset": 2048,
00:10:19.730        "data_size": 63488
00:10:19.730      },
00:10:19.730      {
00:10:19.730        "name": "BaseBdev2",
00:10:19.730        "uuid": "269012f3-fa66-5ea5-a413-3645374b989a",
00:10:19.730        "is_configured": true,
00:10:19.730        "data_offset": 2048,
00:10:19.730        "data_size": 63488
00:10:19.730      },
00:10:19.730      {
00:10:19.730        "name": "BaseBdev3",
00:10:19.730        "uuid": "f3ecbf14-3364-56ef-9917-d29d9c8a206a",
00:10:19.730        "is_configured": true,
00:10:19.730        "data_offset": 2048,
00:10:19.730        "data_size": 63488
00:10:19.730      }
00:10:19.730    ]
00:10:19.730  }'
00:10:19.730   11:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:19.730   11:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.991   11:31:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:10:19.992   11:31:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:10:20.252  [2024-12-16 11:31:46.149269] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:10:21.190   11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:10:21.190   11:31:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:21.190   11:31:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:21.190   11:31:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:21.190   11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:10:21.190   11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]]
00:10:21.190   11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3
00:10:21.190   11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:10:21.190   11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:21.190   11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:21.190   11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:21.190   11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:21.190   11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:21.190   11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:21.190   11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:21.190   11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:21.190   11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:21.190    11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:21.190    11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:21.190    11:31:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:21.190    11:31:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:21.190    11:31:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:21.190   11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:21.190    "name": "raid_bdev1",
00:10:21.190    "uuid": "fc023749-ac80-4a5c-958c-d8e69069c011",
00:10:21.190    "strip_size_kb": 64,
00:10:21.190    "state": "online",
00:10:21.190    "raid_level": "concat",
00:10:21.190    "superblock": true,
00:10:21.190    "num_base_bdevs": 3,
00:10:21.190    "num_base_bdevs_discovered": 3,
00:10:21.190    "num_base_bdevs_operational": 3,
00:10:21.191    "base_bdevs_list": [
00:10:21.191      {
00:10:21.191        "name": "BaseBdev1",
00:10:21.191        "uuid": "e6700d73-6e3f-5242-ba96-e43653107478",
00:10:21.191        "is_configured": true,
00:10:21.191        "data_offset": 2048,
00:10:21.191        "data_size": 63488
00:10:21.191      },
00:10:21.191      {
00:10:21.191        "name": "BaseBdev2",
00:10:21.191        "uuid": "269012f3-fa66-5ea5-a413-3645374b989a",
00:10:21.191        "is_configured": true,
00:10:21.191        "data_offset": 2048,
00:10:21.191        "data_size": 63488
00:10:21.191      },
00:10:21.191      {
00:10:21.191        "name": "BaseBdev3",
00:10:21.191        "uuid": "f3ecbf14-3364-56ef-9917-d29d9c8a206a",
00:10:21.191        "is_configured": true,
00:10:21.191        "data_offset": 2048,
00:10:21.191        "data_size": 63488
00:10:21.191      }
00:10:21.191    ]
00:10:21.191  }'
00:10:21.191   11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:21.191   11:31:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:21.760   11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:10:21.760   11:31:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:21.760   11:31:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:21.760  [2024-12-16 11:31:47.549460] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:21.760  [2024-12-16 11:31:47.549509] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:21.760  [2024-12-16 11:31:47.552399] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:21.760  [2024-12-16 11:31:47.552461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:21.760  [2024-12-16 11:31:47.552510] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:21.760  [2024-12-16 11:31:47.552521] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline
00:10:21.760  {
00:10:21.760    "results": [
00:10:21.760      {
00:10:21.760        "job": "raid_bdev1",
00:10:21.760        "core_mask": "0x1",
00:10:21.760        "workload": "randrw",
00:10:21.760        "percentage": 50,
00:10:21.760        "status": "finished",
00:10:21.760        "queue_depth": 1,
00:10:21.760        "io_size": 131072,
00:10:21.760        "runtime": 1.401073,
00:10:21.760        "iops": 15823.586636813357,
00:10:21.760        "mibps": 1977.9483296016697,
00:10:21.760        "io_failed": 1,
00:10:21.760        "io_timeout": 0,
00:10:21.760        "avg_latency_us": 87.47444718591638,
00:10:21.760        "min_latency_us": 26.494323144104804,
00:10:21.760        "max_latency_us": 1674.172925764192
00:10:21.760      }
00:10:21.760    ],
00:10:21.760    "core_count": 1
00:10:21.760  }
00:10:21.760   11:31:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:21.760   11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78498
00:10:21.760   11:31:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 78498 ']'
00:10:21.760   11:31:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 78498
00:10:21.760    11:31:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname
00:10:21.760   11:31:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:21.760    11:31:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78498
00:10:21.760   11:31:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:21.760   11:31:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:10:21.760  killing process with pid 78498
00:10:21.760   11:31:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78498'
00:10:21.760   11:31:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 78498
00:10:21.760  [2024-12-16 11:31:47.599905] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:21.760   11:31:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 78498
00:10:21.760  [2024-12-16 11:31:47.627140] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:22.019    11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZiEnrwP8yI
00:10:22.019    11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:10:22.019    11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:10:22.019   11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71
00:10:22.019   11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
00:10:22.019   11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:10:22.019   11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:10:22.019   11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]]
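The 0.71 extracted from the bdevperf log is consistent with the results block above: one failed read over a 1.401073 s runtime is roughly 1 / 1.401073 ≈ 0.71 failures per second, and because concat carries no redundancy the test only requires this rate to be non-zero.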
00:10:22.019  
00:10:22.019  real	0m3.419s
00:10:22.019  user	0m4.394s
00:10:22.019  sys	0m0.568s
00:10:22.019   11:31:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:22.019   11:31:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:22.019  ************************************
00:10:22.019  END TEST raid_read_error_test
00:10:22.019  ************************************
00:10:22.019   11:31:47 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write
00:10:22.019   11:31:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:10:22.019   11:31:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:22.019   11:31:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:22.019  ************************************
00:10:22.019  START TEST raid_write_error_test
00:10:22.019  ************************************
00:10:22.019   11:31:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write
00:10:22.019   11:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat
00:10:22.019   11:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3
00:10:22.019   11:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:10:22.019    11:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:10:22.019    11:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:22.019    11:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:10:22.019    11:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:10:22.019    11:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:22.019    11:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:10:22.019    11:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:10:22.019    11:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:22.019    11:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:10:22.019    11:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:10:22.019    11:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:22.019   11:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:10:22.019   11:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:10:22.019   11:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:10:22.019   11:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:10:22.019   11:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:10:22.019   11:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:10:22.019   11:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:10:22.019   11:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']'
00:10:22.019   11:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:10:22.019   11:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:10:22.019    11:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:10:22.019   11:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.IN3iYQz2yd
00:10:22.019   11:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78627
00:10:22.019   11:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78627
00:10:22.019   11:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:10:22.019   11:31:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 78627 ']'
00:10:22.019   11:31:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:22.019   11:31:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:22.019  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:22.019   11:31:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:22.019   11:31:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:10:22.019   11:31:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:22.020  [2024-12-16 11:31:48.065733] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:10:22.020  [2024-12-16 11:31:48.065891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78627 ]
00:10:22.278  [2024-12-16 11:31:48.234917] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:22.278  [2024-12-16 11:31:48.287711] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:10:22.278  [2024-12-16 11:31:48.330173] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:22.278  [2024-12-16 11:31:48.330218] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:22.848   11:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:10:22.848   11:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0
00:10:22.848   11:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:10:22.848   11:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:10:22.848   11:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:22.848   11:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:23.108  BaseBdev1_malloc
00:10:23.108   11:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:23.108   11:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:10:23.108   11:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:23.108   11:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:23.108  true
00:10:23.108   11:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:23.108   11:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:10:23.108   11:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:23.108   11:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:23.108  [2024-12-16 11:31:48.944583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:10:23.108  [2024-12-16 11:31:48.944639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:23.108  [2024-12-16 11:31:48.944662] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:10:23.108  [2024-12-16 11:31:48.944671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:23.108  [2024-12-16 11:31:48.946835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:23.108  [2024-12-16 11:31:48.946869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:10:23.108  BaseBdev1
00:10:23.108   11:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:23.108   11:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:10:23.108   11:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:10:23.108   11:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:23.108   11:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:23.108  BaseBdev2_malloc
00:10:23.108   11:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:23.108   11:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:10:23.108   11:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:23.108   11:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:23.108  true
00:10:23.108   11:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:23.108   11:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:10:23.108   11:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:23.108   11:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:23.108  [2024-12-16 11:31:48.997032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:10:23.108  [2024-12-16 11:31:48.997090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:23.108  [2024-12-16 11:31:48.997110] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:10:23.108  [2024-12-16 11:31:48.997120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:23.108  [2024-12-16 11:31:48.999186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:23.108  [2024-12-16 11:31:48.999221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:10:23.108  BaseBdev2
00:10:23.108   11:31:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:23.108   11:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:10:23.108   11:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:10:23.108   11:31:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:23.108   11:31:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:23.108  BaseBdev3_malloc
00:10:23.108   11:31:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:23.108   11:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:10:23.108   11:31:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:23.108   11:31:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:23.108  true
00:10:23.108   11:31:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:23.108   11:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:10:23.108   11:31:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:23.108   11:31:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:23.108  [2024-12-16 11:31:49.037908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:10:23.108  [2024-12-16 11:31:49.037957] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:23.108  [2024-12-16 11:31:49.037975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:10:23.108  [2024-12-16 11:31:49.037984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:23.108  [2024-12-16 11:31:49.040023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:23.108  [2024-12-16 11:31:49.040058] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:10:23.108  BaseBdev3
00:10:23.108   11:31:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:23.108   11:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s
00:10:23.108   11:31:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:23.108   11:31:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:23.108  [2024-12-16 11:31:49.049953] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:23.108  [2024-12-16 11:31:49.051775] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:23.108  [2024-12-16 11:31:49.051869] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:23.108  [2024-12-16 11:31:49.052064] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:10:23.108  [2024-12-16 11:31:49.052088] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:10:23.108  [2024-12-16 11:31:49.052381] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:10:23.108  [2024-12-16 11:31:49.052552] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:10:23.108  [2024-12-16 11:31:49.052570] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00
00:10:23.108  [2024-12-16 11:31:49.052739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:23.108   11:31:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:23.108   11:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:10:23.108   11:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:23.108   11:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:23.108   11:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:23.108   11:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:23.108   11:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:23.108   11:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:23.108   11:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:23.108   11:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:23.108   11:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:23.108    11:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:23.108    11:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:23.108    11:31:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:23.108    11:31:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:23.108    11:31:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:23.108   11:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:23.108    "name": "raid_bdev1",
00:10:23.108    "uuid": "92c5e81f-35c7-463d-a63c-4fcc151d8f7f",
00:10:23.108    "strip_size_kb": 64,
00:10:23.108    "state": "online",
00:10:23.108    "raid_level": "concat",
00:10:23.108    "superblock": true,
00:10:23.108    "num_base_bdevs": 3,
00:10:23.108    "num_base_bdevs_discovered": 3,
00:10:23.108    "num_base_bdevs_operational": 3,
00:10:23.108    "base_bdevs_list": [
00:10:23.108      {
00:10:23.108        "name": "BaseBdev1",
00:10:23.108        "uuid": "dfe02067-c430-54da-82f4-6105702316bd",
00:10:23.108        "is_configured": true,
00:10:23.108        "data_offset": 2048,
00:10:23.108        "data_size": 63488
00:10:23.108      },
00:10:23.108      {
00:10:23.108        "name": "BaseBdev2",
00:10:23.108        "uuid": "502b56a3-58bb-51d5-8c3d-5d461f3614c7",
00:10:23.108        "is_configured": true,
00:10:23.108        "data_offset": 2048,
00:10:23.108        "data_size": 63488
00:10:23.108      },
00:10:23.108      {
00:10:23.108        "name": "BaseBdev3",
00:10:23.108        "uuid": "2f592914-eb6b-5982-b50b-c9c7d36899a2",
00:10:23.108        "is_configured": true,
00:10:23.108        "data_offset": 2048,
00:10:23.108        "data_size": 63488
00:10:23.108      }
00:10:23.108    ]
00:10:23.108  }'
00:10:23.108   11:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:23.108   11:31:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:23.678   11:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:10:23.678   11:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:10:23.678  [2024-12-16 11:31:49.597479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:10:24.615   11:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:10:24.615   11:31:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:24.615   11:31:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:24.615   11:31:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:24.615   11:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:10:24.616   11:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]]
00:10:24.616   11:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3
00:10:24.616   11:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:10:24.616   11:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:24.616   11:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:24.616   11:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:24.616   11:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:24.616   11:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:24.616   11:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:24.616   11:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:24.616   11:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:24.616   11:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:24.616    11:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:24.616    11:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:24.616    11:31:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:24.616    11:31:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:24.616    11:31:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:24.616   11:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:24.616    "name": "raid_bdev1",
00:10:24.616    "uuid": "92c5e81f-35c7-463d-a63c-4fcc151d8f7f",
00:10:24.616    "strip_size_kb": 64,
00:10:24.616    "state": "online",
00:10:24.616    "raid_level": "concat",
00:10:24.616    "superblock": true,
00:10:24.616    "num_base_bdevs": 3,
00:10:24.616    "num_base_bdevs_discovered": 3,
00:10:24.616    "num_base_bdevs_operational": 3,
00:10:24.616    "base_bdevs_list": [
00:10:24.616      {
00:10:24.616        "name": "BaseBdev1",
00:10:24.616        "uuid": "dfe02067-c430-54da-82f4-6105702316bd",
00:10:24.616        "is_configured": true,
00:10:24.616        "data_offset": 2048,
00:10:24.616        "data_size": 63488
00:10:24.616      },
00:10:24.616      {
00:10:24.616        "name": "BaseBdev2",
00:10:24.616        "uuid": "502b56a3-58bb-51d5-8c3d-5d461f3614c7",
00:10:24.616        "is_configured": true,
00:10:24.616        "data_offset": 2048,
00:10:24.616        "data_size": 63488
00:10:24.616      },
00:10:24.616      {
00:10:24.616        "name": "BaseBdev3",
00:10:24.616        "uuid": "2f592914-eb6b-5982-b50b-c9c7d36899a2",
00:10:24.616        "is_configured": true,
00:10:24.616        "data_offset": 2048,
00:10:24.616        "data_size": 63488
00:10:24.616      }
00:10:24.616    ]
00:10:24.616  }'
00:10:24.616   11:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:24.616   11:31:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:25.196   11:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:10:25.196   11:31:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:25.196   11:31:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:25.196  [2024-12-16 11:31:50.986126] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:25.196  [2024-12-16 11:31:50.986166] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:25.196  [2024-12-16 11:31:50.988701] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:25.196  [2024-12-16 11:31:50.988757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:25.196  [2024-12-16 11:31:50.988792] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:25.196  [2024-12-16 11:31:50.988816] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline
00:10:25.196   11:31:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:25.196   11:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78627
00:10:25.196  {
00:10:25.196    "results": [
00:10:25.196      {
00:10:25.196        "job": "raid_bdev1",
00:10:25.196        "core_mask": "0x1",
00:10:25.196        "workload": "randrw",
00:10:25.196        "percentage": 50,
00:10:25.196        "status": "finished",
00:10:25.196        "queue_depth": 1,
00:10:25.196        "io_size": 131072,
00:10:25.196        "runtime": 1.389293,
00:10:25.196        "iops": 15354.572433604719,
00:10:25.196        "mibps": 1919.3215542005898,
00:10:25.196        "io_failed": 1,
00:10:25.196        "io_timeout": 0,
00:10:25.196        "avg_latency_us": 90.16293456004465,
00:10:25.196        "min_latency_us": 26.382532751091702,
00:10:25.196        "max_latency_us": 1430.9170305676855
00:10:25.196      }
00:10:25.196    ],
00:10:25.196    "core_count": 1
00:10:25.196  }
00:10:25.196   11:31:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 78627 ']'
00:10:25.196   11:31:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 78627
00:10:25.196    11:31:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname
00:10:25.197   11:31:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:25.197    11:31:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78627
00:10:25.197   11:31:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:25.197   11:31:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:10:25.197  killing process with pid 78627
00:10:25.197   11:31:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78627'
00:10:25.197   11:31:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 78627
00:10:25.197   11:31:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 78627
00:10:25.197  [2024-12-16 11:31:51.013423] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:25.197  [2024-12-16 11:31:51.039990] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:25.456    11:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:10:25.456    11:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.IN3iYQz2yd
00:10:25.456    11:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:10:25.456   11:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72
00:10:25.456   11:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
00:10:25.456   11:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:10:25.456   11:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:10:25.456   11:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]]
00:10:25.456  
00:10:25.456  real	0m3.342s
00:10:25.456  user	0m4.227s
00:10:25.456  sys	0m0.565s
00:10:25.456   11:31:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:25.456   11:31:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:25.456  ************************************
00:10:25.456  END TEST raid_write_error_test
00:10:25.456  ************************************
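The raid_write_error_test run above follows a fixed pattern that can be read straight out of the xtrace lines: each base bdev is a 32 MiB / 512 B malloc bdev wrapped first in an error bdev (which re-exposes it as EE_<name>) and then in a passthru bdev; the three passthru bdevs are assembled into a concat RAID with a 64 KiB strip size and a superblock; bdevperf drives random read/write I/O against raid_bdev1; a single write failure is injected into the first base bdev; and because concat has no redundancy, the test expects the fails-per-second parsed from the bdevperf log to be non-zero (here 0.72). A minimal sketch of the same RPC sequence, assuming a running SPDK target started with -z, the stock scripts/rpc.py client and the default /var/tmp/spdk.sock socket — the paths and bdev names are illustrative, not copied from the harness:

  # build one error-injectable base bdev (repeat for BaseBdev2 / BaseBdev3)
  ./scripts/rpc.py bdev_malloc_create 32 512 -b BaseBdev1_malloc
  ./scripts/rpc.py bdev_error_create BaseBdev1_malloc              # exposes EE_BaseBdev1_malloc
  ./scripts/rpc.py bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1

  # assemble the concat RAID with a 64 KiB strip size and superblock
  ./scripts/rpc.py bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s

  # inject one write failure into the first base bdev, then kick off the bdevperf run
  ./scripts/rpc.py bdev_error_inject_error EE_BaseBdev1_malloc write failure
  ./examples/bdev/bdevperf/bdevperf.py perform_tests

  # tear down
  ./scripts/rpc.py bdev_raid_delete raid_bdev1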
00:10:25.456   11:31:51 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:10:25.456   11:31:51 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false
00:10:25.456   11:31:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:10:25.456   11:31:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:25.456   11:31:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:25.456  ************************************
00:10:25.456  START TEST raid_state_function_test
00:10:25.456  ************************************
00:10:25.456   11:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false
00:10:25.456   11:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:10:25.456   11:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:10:25.456   11:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:10:25.456   11:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:10:25.456    11:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:10:25.456    11:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:25.456    11:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:10:25.456    11:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:25.456    11:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:25.456    11:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:10:25.456    11:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:25.456    11:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:25.456    11:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:10:25.456    11:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:25.456    11:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:25.456   11:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:10:25.456   11:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:10:25.456   11:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:10:25.456   11:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:10:25.456   11:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:10:25.456   11:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:10:25.456   11:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:10:25.456   11:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:10:25.456   11:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:10:25.456   11:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:10:25.456   11:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78754
00:10:25.456   11:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:10:25.456  Process raid pid: 78754
00:10:25.456   11:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78754'
00:10:25.456   11:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78754
00:10:25.456   11:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 78754 ']'
00:10:25.456   11:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:25.456   11:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:25.456  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:25.456   11:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:25.456   11:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:10:25.456   11:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:25.456  [2024-12-16 11:31:51.475403] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:10:25.457  [2024-12-16 11:31:51.475570] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:25.716  [2024-12-16 11:31:51.643526] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:25.716  [2024-12-16 11:31:51.692545] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:10:25.716  [2024-12-16 11:31:51.735436] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:25.716  [2024-12-16 11:31:51.735475] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:26.284   11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:10:26.284   11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:10:26.284   11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:26.284   11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:26.284   11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.284  [2024-12-16 11:31:52.345337] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:26.284  [2024-12-16 11:31:52.345395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:26.284  [2024-12-16 11:31:52.345408] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:26.284  [2024-12-16 11:31:52.345419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:26.284  [2024-12-16 11:31:52.345427] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:26.284  [2024-12-16 11:31:52.345439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:26.544   11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:26.544   11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:26.544   11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:26.544   11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:26.544   11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:26.544   11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:26.544   11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:26.544   11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:26.544   11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:26.544   11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:26.544   11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:26.544    11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:26.544    11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:26.544    11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:26.544    11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.544    11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:26.544   11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:26.544    "name": "Existed_Raid",
00:10:26.544    "uuid": "00000000-0000-0000-0000-000000000000",
00:10:26.544    "strip_size_kb": 0,
00:10:26.544    "state": "configuring",
00:10:26.544    "raid_level": "raid1",
00:10:26.544    "superblock": false,
00:10:26.544    "num_base_bdevs": 3,
00:10:26.544    "num_base_bdevs_discovered": 0,
00:10:26.544    "num_base_bdevs_operational": 3,
00:10:26.544    "base_bdevs_list": [
00:10:26.544      {
00:10:26.544        "name": "BaseBdev1",
00:10:26.544        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:26.544        "is_configured": false,
00:10:26.544        "data_offset": 0,
00:10:26.544        "data_size": 0
00:10:26.544      },
00:10:26.544      {
00:10:26.544        "name": "BaseBdev2",
00:10:26.544        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:26.544        "is_configured": false,
00:10:26.544        "data_offset": 0,
00:10:26.544        "data_size": 0
00:10:26.544      },
00:10:26.544      {
00:10:26.544        "name": "BaseBdev3",
00:10:26.544        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:26.544        "is_configured": false,
00:10:26.544        "data_offset": 0,
00:10:26.544        "data_size": 0
00:10:26.544      }
00:10:26.544    ]
00:10:26.544  }'
00:10:26.544   11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:26.544   11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
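The verify_raid_bdev_state helper seen above works by pulling the RAID bdev's descriptor with bdev_raid_get_bdevs all, filtering it with jq, and comparing state, raid level, strip size and base-bdev counts against the expected values; at this point Existed_Raid is still "configuring" because none of BaseBdev1-3 exist yet. A minimal sketch of that check, assuming scripts/rpc.py and jq are on the PATH — the variable name mirrors the harness but is illustrative:

  # fetch the named raid bdev and assert a few fields of its state
  info=$(./scripts/rpc.py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  [ "$(jq -r .state <<< "$info")" = "configuring" ]
  [ "$(jq -r .raid_level <<< "$info")" = "raid1" ]
  [ "$(jq -r .num_base_bdevs_discovered <<< "$info")" = "0" ]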
00:10:26.803   11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:26.803   11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:26.803   11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.803  [2024-12-16 11:31:52.832398] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:26.803  [2024-12-16 11:31:52.832500] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:10:26.803   11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:26.803   11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:26.803   11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:26.803   11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.803  [2024-12-16 11:31:52.840410] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:26.804  [2024-12-16 11:31:52.840460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:26.804  [2024-12-16 11:31:52.840470] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:26.804  [2024-12-16 11:31:52.840481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:26.804  [2024-12-16 11:31:52.840489] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:26.804  [2024-12-16 11:31:52.840500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:26.804   11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:26.804   11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:26.804   11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:26.804   11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:26.804  [2024-12-16 11:31:52.857759] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:26.804  BaseBdev1
00:10:26.804   11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:26.804   11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:10:26.804   11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:10:26.804   11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:26.804   11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:10:26.804   11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:26.804   11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:26.804   11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:26.804   11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:26.804   11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:27.064   11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:27.064   11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:27.064   11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:27.064   11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:27.064  [
00:10:27.064  {
00:10:27.064  "name": "BaseBdev1",
00:10:27.064  "aliases": [
00:10:27.064  "e4d9b54a-779f-4c20-92a0-790ea5d154b8"
00:10:27.064  ],
00:10:27.064  "product_name": "Malloc disk",
00:10:27.064  "block_size": 512,
00:10:27.064  "num_blocks": 65536,
00:10:27.064  "uuid": "e4d9b54a-779f-4c20-92a0-790ea5d154b8",
00:10:27.064  "assigned_rate_limits": {
00:10:27.064  "rw_ios_per_sec": 0,
00:10:27.064  "rw_mbytes_per_sec": 0,
00:10:27.064  "r_mbytes_per_sec": 0,
00:10:27.064  "w_mbytes_per_sec": 0
00:10:27.064  },
00:10:27.064  "claimed": true,
00:10:27.064  "claim_type": "exclusive_write",
00:10:27.064  "zoned": false,
00:10:27.064  "supported_io_types": {
00:10:27.064  "read": true,
00:10:27.064  "write": true,
00:10:27.064  "unmap": true,
00:10:27.064  "flush": true,
00:10:27.064  "reset": true,
00:10:27.064  "nvme_admin": false,
00:10:27.064  "nvme_io": false,
00:10:27.064  "nvme_io_md": false,
00:10:27.064  "write_zeroes": true,
00:10:27.064  "zcopy": true,
00:10:27.064  "get_zone_info": false,
00:10:27.064  "zone_management": false,
00:10:27.064  "zone_append": false,
00:10:27.064  "compare": false,
00:10:27.064  "compare_and_write": false,
00:10:27.064  "abort": true,
00:10:27.064  "seek_hole": false,
00:10:27.064  "seek_data": false,
00:10:27.064  "copy": true,
00:10:27.064  "nvme_iov_md": false
00:10:27.064  },
00:10:27.064  "memory_domains": [
00:10:27.064  {
00:10:27.064  "dma_device_id": "system",
00:10:27.064  "dma_device_type": 1
00:10:27.064  },
00:10:27.064  {
00:10:27.064  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:27.064  "dma_device_type": 2
00:10:27.064  }
00:10:27.064  ],
00:10:27.064  "driver_specific": {}
00:10:27.064  }
00:10:27.064  ]
00:10:27.064   11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:27.064   11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:10:27.064   11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:27.064   11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:27.064   11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:27.064   11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:27.064   11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:27.064   11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:27.064   11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:27.064   11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:27.064   11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:27.064   11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:27.064    11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:27.064    11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:27.064    11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:27.064    11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:27.064    11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:27.064   11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:27.064    "name": "Existed_Raid",
00:10:27.064    "uuid": "00000000-0000-0000-0000-000000000000",
00:10:27.064    "strip_size_kb": 0,
00:10:27.064    "state": "configuring",
00:10:27.064    "raid_level": "raid1",
00:10:27.064    "superblock": false,
00:10:27.064    "num_base_bdevs": 3,
00:10:27.064    "num_base_bdevs_discovered": 1,
00:10:27.064    "num_base_bdevs_operational": 3,
00:10:27.064    "base_bdevs_list": [
00:10:27.064      {
00:10:27.064        "name": "BaseBdev1",
00:10:27.064        "uuid": "e4d9b54a-779f-4c20-92a0-790ea5d154b8",
00:10:27.064        "is_configured": true,
00:10:27.064        "data_offset": 0,
00:10:27.064        "data_size": 65536
00:10:27.064      },
00:10:27.064      {
00:10:27.064        "name": "BaseBdev2",
00:10:27.064        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:27.064        "is_configured": false,
00:10:27.064        "data_offset": 0,
00:10:27.064        "data_size": 0
00:10:27.064      },
00:10:27.064      {
00:10:27.064        "name": "BaseBdev3",
00:10:27.064        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:27.064        "is_configured": false,
00:10:27.064        "data_offset": 0,
00:10:27.064        "data_size": 0
00:10:27.064      }
00:10:27.064    ]
00:10:27.064  }'
00:10:27.064   11:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:27.064   11:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:27.324   11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:27.324   11:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:27.324   11:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:27.324  [2024-12-16 11:31:53.305104] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:27.324  [2024-12-16 11:31:53.305176] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:10:27.324   11:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:27.324   11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:27.324   11:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:27.324   11:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:27.324  [2024-12-16 11:31:53.317150] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:27.324  [2024-12-16 11:31:53.319287] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:27.324  [2024-12-16 11:31:53.319332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:27.324  [2024-12-16 11:31:53.319343] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:27.324  [2024-12-16 11:31:53.319355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:27.324   11:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:27.324   11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:10:27.324   11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:27.324   11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:27.324   11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:27.324   11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:27.324   11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:27.324   11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:27.324   11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:27.324   11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:27.324   11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:27.324   11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:27.324   11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:27.324    11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:27.324    11:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:27.324    11:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:27.324    11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:27.324    11:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:27.324   11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:27.324    "name": "Existed_Raid",
00:10:27.324    "uuid": "00000000-0000-0000-0000-000000000000",
00:10:27.324    "strip_size_kb": 0,
00:10:27.324    "state": "configuring",
00:10:27.324    "raid_level": "raid1",
00:10:27.324    "superblock": false,
00:10:27.324    "num_base_bdevs": 3,
00:10:27.324    "num_base_bdevs_discovered": 1,
00:10:27.324    "num_base_bdevs_operational": 3,
00:10:27.324    "base_bdevs_list": [
00:10:27.324      {
00:10:27.324        "name": "BaseBdev1",
00:10:27.324        "uuid": "e4d9b54a-779f-4c20-92a0-790ea5d154b8",
00:10:27.324        "is_configured": true,
00:10:27.324        "data_offset": 0,
00:10:27.324        "data_size": 65536
00:10:27.324      },
00:10:27.324      {
00:10:27.324        "name": "BaseBdev2",
00:10:27.324        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:27.324        "is_configured": false,
00:10:27.324        "data_offset": 0,
00:10:27.324        "data_size": 0
00:10:27.324      },
00:10:27.324      {
00:10:27.324        "name": "BaseBdev3",
00:10:27.324        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:27.324        "is_configured": false,
00:10:27.324        "data_offset": 0,
00:10:27.324        "data_size": 0
00:10:27.324      }
00:10:27.324    ]
00:10:27.324  }'
00:10:27.324   11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:27.324   11:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:27.894   11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:10:27.894   11:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:27.894   11:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:27.894  [2024-12-16 11:31:53.790967] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:27.894  BaseBdev2
00:10:27.894   11:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:27.894   11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:10:27.894   11:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:10:27.894   11:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:27.894   11:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:10:27.894   11:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:27.894   11:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:27.894   11:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:27.894   11:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:27.894   11:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:27.894   11:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:27.894   11:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:10:27.894   11:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:27.894   11:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:27.894  [
00:10:27.894  {
00:10:27.894  "name": "BaseBdev2",
00:10:27.894  "aliases": [
00:10:27.894  "a975dc12-08cc-4f1d-ac9c-48974c2022da"
00:10:27.894  ],
00:10:27.894  "product_name": "Malloc disk",
00:10:27.894  "block_size": 512,
00:10:27.894  "num_blocks": 65536,
00:10:27.894  "uuid": "a975dc12-08cc-4f1d-ac9c-48974c2022da",
00:10:27.894  "assigned_rate_limits": {
00:10:27.894  "rw_ios_per_sec": 0,
00:10:27.894  "rw_mbytes_per_sec": 0,
00:10:27.894  "r_mbytes_per_sec": 0,
00:10:27.894  "w_mbytes_per_sec": 0
00:10:27.894  },
00:10:27.894  "claimed": true,
00:10:27.894  "claim_type": "exclusive_write",
00:10:27.894  "zoned": false,
00:10:27.894  "supported_io_types": {
00:10:27.894  "read": true,
00:10:27.894  "write": true,
00:10:27.894  "unmap": true,
00:10:27.894  "flush": true,
00:10:27.894  "reset": true,
00:10:27.894  "nvme_admin": false,
00:10:27.894  "nvme_io": false,
00:10:27.894  "nvme_io_md": false,
00:10:27.894  "write_zeroes": true,
00:10:27.894  "zcopy": true,
00:10:27.894  "get_zone_info": false,
00:10:27.894  "zone_management": false,
00:10:27.894  "zone_append": false,
00:10:27.894  "compare": false,
00:10:27.894  "compare_and_write": false,
00:10:27.894  "abort": true,
00:10:27.894  "seek_hole": false,
00:10:27.894  "seek_data": false,
00:10:27.894  "copy": true,
00:10:27.894  "nvme_iov_md": false
00:10:27.894  },
00:10:27.894  "memory_domains": [
00:10:27.894  {
00:10:27.894  "dma_device_id": "system",
00:10:27.894  "dma_device_type": 1
00:10:27.894  },
00:10:27.894  {
00:10:27.894  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:27.894  "dma_device_type": 2
00:10:27.894  }
00:10:27.894  ],
00:10:27.894  "driver_specific": {}
00:10:27.894  }
00:10:27.894  ]
00:10:27.894   11:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:27.894   11:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
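The waitforbdev step that produced the descriptor dump above simply polls bdev_get_bdevs for the named bdev until it appears; the dump also shows the malloc bdev already claimed with claim_type "exclusive_write" by the RAID module. A one-line sketch of that wait, assuming scripts/rpc.py, where -t is the timeout in milliseconds matching the harness's 2000:

  ./scripts/rpc.py bdev_get_bdevs -b BaseBdev2 -t 2000 >/dev/null && echo "BaseBdev2 ready"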
00:10:27.894   11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:10:27.894   11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:27.894   11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:27.894   11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:27.894   11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:27.894   11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:27.894   11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:27.894   11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:27.894   11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:27.895   11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:27.895   11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:27.895   11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:27.895    11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:27.895    11:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:27.895    11:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:27.895    11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:27.895    11:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:27.895   11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:27.895    "name": "Existed_Raid",
00:10:27.895    "uuid": "00000000-0000-0000-0000-000000000000",
00:10:27.895    "strip_size_kb": 0,
00:10:27.895    "state": "configuring",
00:10:27.895    "raid_level": "raid1",
00:10:27.895    "superblock": false,
00:10:27.895    "num_base_bdevs": 3,
00:10:27.895    "num_base_bdevs_discovered": 2,
00:10:27.895    "num_base_bdevs_operational": 3,
00:10:27.895    "base_bdevs_list": [
00:10:27.895      {
00:10:27.895        "name": "BaseBdev1",
00:10:27.895        "uuid": "e4d9b54a-779f-4c20-92a0-790ea5d154b8",
00:10:27.895        "is_configured": true,
00:10:27.895        "data_offset": 0,
00:10:27.895        "data_size": 65536
00:10:27.895      },
00:10:27.895      {
00:10:27.895        "name": "BaseBdev2",
00:10:27.895        "uuid": "a975dc12-08cc-4f1d-ac9c-48974c2022da",
00:10:27.895        "is_configured": true,
00:10:27.895        "data_offset": 0,
00:10:27.895        "data_size": 65536
00:10:27.895      },
00:10:27.895      {
00:10:27.895        "name": "BaseBdev3",
00:10:27.895        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:27.895        "is_configured": false,
00:10:27.895        "data_offset": 0,
00:10:27.895        "data_size": 0
00:10:27.895      }
00:10:27.895    ]
00:10:27.895  }'
00:10:27.895   11:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:27.895   11:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.464   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:10:28.464   11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.464   11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.464  [2024-12-16 11:31:54.297370] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:28.464  [2024-12-16 11:31:54.297425] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:10:28.464  [2024-12-16 11:31:54.297436] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:10:28.464  [2024-12-16 11:31:54.297762] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:10:28.464  [2024-12-16 11:31:54.297931] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:10:28.464  [2024-12-16 11:31:54.297949] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:10:28.464  [2024-12-16 11:31:54.298157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:28.464  BaseBdev3
00:10:28.464   11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.464   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:10:28.464   11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:10:28.464   11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:28.464   11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:10:28.464   11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:28.464   11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:28.464   11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:28.464   11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.464   11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.464   11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.464   11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:10:28.464   11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.464   11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.464  [
00:10:28.464  {
00:10:28.464  "name": "BaseBdev3",
00:10:28.464  "aliases": [
00:10:28.464  "0101fea4-7fb2-44ad-a26a-37d6039c6025"
00:10:28.464  ],
00:10:28.464  "product_name": "Malloc disk",
00:10:28.464  "block_size": 512,
00:10:28.464  "num_blocks": 65536,
00:10:28.464  "uuid": "0101fea4-7fb2-44ad-a26a-37d6039c6025",
00:10:28.464  "assigned_rate_limits": {
00:10:28.464  "rw_ios_per_sec": 0,
00:10:28.464  "rw_mbytes_per_sec": 0,
00:10:28.464  "r_mbytes_per_sec": 0,
00:10:28.464  "w_mbytes_per_sec": 0
00:10:28.464  },
00:10:28.464  "claimed": true,
00:10:28.464  "claim_type": "exclusive_write",
00:10:28.464  "zoned": false,
00:10:28.464  "supported_io_types": {
00:10:28.464  "read": true,
00:10:28.464  "write": true,
00:10:28.464  "unmap": true,
00:10:28.464  "flush": true,
00:10:28.464  "reset": true,
00:10:28.464  "nvme_admin": false,
00:10:28.464  "nvme_io": false,
00:10:28.464  "nvme_io_md": false,
00:10:28.464  "write_zeroes": true,
00:10:28.464  "zcopy": true,
00:10:28.464  "get_zone_info": false,
00:10:28.464  "zone_management": false,
00:10:28.464  "zone_append": false,
00:10:28.464  "compare": false,
00:10:28.464  "compare_and_write": false,
00:10:28.464  "abort": true,
00:10:28.464  "seek_hole": false,
00:10:28.464  "seek_data": false,
00:10:28.464  "copy": true,
00:10:28.464  "nvme_iov_md": false
00:10:28.464  },
00:10:28.464  "memory_domains": [
00:10:28.464  {
00:10:28.464  "dma_device_id": "system",
00:10:28.464  "dma_device_type": 1
00:10:28.464  },
00:10:28.464  {
00:10:28.465  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:28.465  "dma_device_type": 2
00:10:28.465  }
00:10:28.465  ],
00:10:28.465  "driver_specific": {}
00:10:28.465  }
00:10:28.465  ]
00:10:28.465   11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.465   11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
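(The step above creates BaseBdev3 and then blocks in the waitforbdev helper. A minimal standalone sketch of that create-and-wait pattern, using the same arguments as the trace (32 MiB malloc bdev, 512-byte blocks, 2000 ms wait); the ./scripts/rpc.py invocation is illustrative:

  ./scripts/rpc.py bdev_malloc_create 32 512 -b BaseBdev3   # create the base bdev
  ./scripts/rpc.py bdev_wait_for_examine                    # let examine callbacks finish
  ./scripts/rpc.py bdev_get_bdevs -b BaseBdev3 -t 2000      # poll until the bdev is visible
)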
00:10:28.465   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:10:28.465   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:28.465   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:10:28.465   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:28.465   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:28.465   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:28.465   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:28.465   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:28.465   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:28.465   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:28.465   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:28.465   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:28.465    11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:28.465    11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:28.465    11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.465    11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.465    11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.465   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:28.465    "name": "Existed_Raid",
00:10:28.465    "uuid": "93cdc49b-7957-4a25-8146-1f63b144574f",
00:10:28.465    "strip_size_kb": 0,
00:10:28.465    "state": "online",
00:10:28.465    "raid_level": "raid1",
00:10:28.465    "superblock": false,
00:10:28.465    "num_base_bdevs": 3,
00:10:28.465    "num_base_bdevs_discovered": 3,
00:10:28.465    "num_base_bdevs_operational": 3,
00:10:28.465    "base_bdevs_list": [
00:10:28.465      {
00:10:28.465        "name": "BaseBdev1",
00:10:28.465        "uuid": "e4d9b54a-779f-4c20-92a0-790ea5d154b8",
00:10:28.465        "is_configured": true,
00:10:28.465        "data_offset": 0,
00:10:28.465        "data_size": 65536
00:10:28.465      },
00:10:28.465      {
00:10:28.465        "name": "BaseBdev2",
00:10:28.465        "uuid": "a975dc12-08cc-4f1d-ac9c-48974c2022da",
00:10:28.465        "is_configured": true,
00:10:28.465        "data_offset": 0,
00:10:28.465        "data_size": 65536
00:10:28.465      },
00:10:28.465      {
00:10:28.465        "name": "BaseBdev3",
00:10:28.465        "uuid": "0101fea4-7fb2-44ad-a26a-37d6039c6025",
00:10:28.465        "is_configured": true,
00:10:28.465        "data_offset": 0,
00:10:28.465        "data_size": 65536
00:10:28.465      }
00:10:28.465    ]
00:10:28.465  }'
00:10:28.465   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:28.465   11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.725   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:10:28.725   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:10:28.725   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:28.725   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:28.725   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:10:28.725   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:28.725    11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:28.725    11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:10:28.725    11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.725    11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.725  [2024-12-16 11:31:54.756994] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:28.725    11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.725   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:28.725    "name": "Existed_Raid",
00:10:28.725    "aliases": [
00:10:28.725      "93cdc49b-7957-4a25-8146-1f63b144574f"
00:10:28.725    ],
00:10:28.725    "product_name": "Raid Volume",
00:10:28.725    "block_size": 512,
00:10:28.725    "num_blocks": 65536,
00:10:28.725    "uuid": "93cdc49b-7957-4a25-8146-1f63b144574f",
00:10:28.725    "assigned_rate_limits": {
00:10:28.725      "rw_ios_per_sec": 0,
00:10:28.725      "rw_mbytes_per_sec": 0,
00:10:28.725      "r_mbytes_per_sec": 0,
00:10:28.725      "w_mbytes_per_sec": 0
00:10:28.725    },
00:10:28.725    "claimed": false,
00:10:28.725    "zoned": false,
00:10:28.725    "supported_io_types": {
00:10:28.725      "read": true,
00:10:28.725      "write": true,
00:10:28.725      "unmap": false,
00:10:28.725      "flush": false,
00:10:28.725      "reset": true,
00:10:28.725      "nvme_admin": false,
00:10:28.725      "nvme_io": false,
00:10:28.725      "nvme_io_md": false,
00:10:28.725      "write_zeroes": true,
00:10:28.725      "zcopy": false,
00:10:28.725      "get_zone_info": false,
00:10:28.725      "zone_management": false,
00:10:28.725      "zone_append": false,
00:10:28.725      "compare": false,
00:10:28.725      "compare_and_write": false,
00:10:28.725      "abort": false,
00:10:28.725      "seek_hole": false,
00:10:28.725      "seek_data": false,
00:10:28.725      "copy": false,
00:10:28.725      "nvme_iov_md": false
00:10:28.725    },
00:10:28.725    "memory_domains": [
00:10:28.725      {
00:10:28.725        "dma_device_id": "system",
00:10:28.725        "dma_device_type": 1
00:10:28.725      },
00:10:28.725      {
00:10:28.725        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:28.725        "dma_device_type": 2
00:10:28.725      },
00:10:28.725      {
00:10:28.725        "dma_device_id": "system",
00:10:28.725        "dma_device_type": 1
00:10:28.725      },
00:10:28.725      {
00:10:28.725        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:28.725        "dma_device_type": 2
00:10:28.725      },
00:10:28.725      {
00:10:28.725        "dma_device_id": "system",
00:10:28.725        "dma_device_type": 1
00:10:28.725      },
00:10:28.725      {
00:10:28.725        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:28.725        "dma_device_type": 2
00:10:28.725      }
00:10:28.725    ],
00:10:28.725    "driver_specific": {
00:10:28.725      "raid": {
00:10:28.725        "uuid": "93cdc49b-7957-4a25-8146-1f63b144574f",
00:10:28.725        "strip_size_kb": 0,
00:10:28.725        "state": "online",
00:10:28.725        "raid_level": "raid1",
00:10:28.725        "superblock": false,
00:10:28.725        "num_base_bdevs": 3,
00:10:28.725        "num_base_bdevs_discovered": 3,
00:10:28.725        "num_base_bdevs_operational": 3,
00:10:28.725        "base_bdevs_list": [
00:10:28.725          {
00:10:28.725            "name": "BaseBdev1",
00:10:28.725            "uuid": "e4d9b54a-779f-4c20-92a0-790ea5d154b8",
00:10:28.725            "is_configured": true,
00:10:28.725            "data_offset": 0,
00:10:28.725            "data_size": 65536
00:10:28.725          },
00:10:28.725          {
00:10:28.725            "name": "BaseBdev2",
00:10:28.725            "uuid": "a975dc12-08cc-4f1d-ac9c-48974c2022da",
00:10:28.725            "is_configured": true,
00:10:28.725            "data_offset": 0,
00:10:28.725            "data_size": 65536
00:10:28.725          },
00:10:28.725          {
00:10:28.725            "name": "BaseBdev3",
00:10:28.725            "uuid": "0101fea4-7fb2-44ad-a26a-37d6039c6025",
00:10:28.725            "is_configured": true,
00:10:28.725            "data_offset": 0,
00:10:28.725            "data_size": 65536
00:10:28.725          }
00:10:28.725        ]
00:10:28.725      }
00:10:28.725    }
00:10:28.725  }'
00:10:28.725    11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:28.985   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:10:28.985  BaseBdev2
00:10:28.985  BaseBdev3'
00:10:28.985    11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:28.985   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:10:28.985   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:28.985    11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:28.985    11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:10:28.985    11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.985    11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.985    11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.985   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:28.985   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:10:28.985   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:28.985    11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:10:28.985    11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:28.985    11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.985    11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.985    11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.985   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:28.985   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:10:28.985   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:28.985    11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:10:28.985    11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:28.985    11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.985    11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.985    11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.985   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:28.985   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
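(The loop above is verify_raid_bdev_properties comparing the raid volume's block size and metadata layout against each configured base bdev; the '512   ' strings are block_size followed by empty md_size/md_interleave/dif_type fields. A rough standalone equivalent of that comparison, assuming rpc.py access; the jq filter is the one used in the trace:

  fields='[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
  cmp_raid_bdev=$(./scripts/rpc.py bdev_get_bdevs -b Existed_Raid | jq -r ".[] | $fields")
  for name in BaseBdev1 BaseBdev2 BaseBdev3; do
    cmp_base_bdev=$(./scripts/rpc.py bdev_get_bdevs -b "$name" | jq -r ".[] | $fields")
    [[ "$cmp_base_bdev" == "$cmp_raid_bdev" ]] || echo "mismatch on $name"
  done
)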
00:10:28.985   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:10:28.985   11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.985   11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.985  [2024-12-16 11:31:54.976366] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:10:28.985   11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.985   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:10:28.985   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:10:28.985   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:10:28.985   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0
00:10:28.985   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:10:28.985   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:10:28.985   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:28.985   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:28.985   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:28.985   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:28.985   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:28.985   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:28.985   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:28.985   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:28.985   11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:28.985    11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:28.985    11:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:28.985    11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.985    11:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.985    11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.985   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:28.985    "name": "Existed_Raid",
00:10:28.985    "uuid": "93cdc49b-7957-4a25-8146-1f63b144574f",
00:10:28.985    "strip_size_kb": 0,
00:10:28.985    "state": "online",
00:10:28.985    "raid_level": "raid1",
00:10:28.985    "superblock": false,
00:10:28.985    "num_base_bdevs": 3,
00:10:28.985    "num_base_bdevs_discovered": 2,
00:10:28.985    "num_base_bdevs_operational": 2,
00:10:28.985    "base_bdevs_list": [
00:10:28.985      {
00:10:28.985        "name": null,
00:10:28.985        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:28.985        "is_configured": false,
00:10:28.985        "data_offset": 0,
00:10:28.985        "data_size": 65536
00:10:28.985      },
00:10:28.985      {
00:10:28.985        "name": "BaseBdev2",
00:10:28.985        "uuid": "a975dc12-08cc-4f1d-ac9c-48974c2022da",
00:10:28.985        "is_configured": true,
00:10:28.985        "data_offset": 0,
00:10:28.985        "data_size": 65536
00:10:28.985      },
00:10:28.985      {
00:10:28.985        "name": "BaseBdev3",
00:10:28.985        "uuid": "0101fea4-7fb2-44ad-a26a-37d6039c6025",
00:10:28.986        "is_configured": true,
00:10:28.986        "data_offset": 0,
00:10:28.986        "data_size": 65536
00:10:28.986      }
00:10:28.986    ]
00:10:28.986  }'
00:10:28.986   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:28.986   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
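(Above, deleting the malloc bdev behind BaseBdev1 degrades the raid1 array without taking it offline: has_redundancy returns 0 for raid1, so the expected state stays online with 2 of 3 members operational and the vacated slot left unconfigured. A minimal sketch of the same check, with the rpc.py invocation illustrative:

  ./scripts/rpc.py bdev_malloc_delete BaseBdev1
  ./scripts/rpc.py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")
    | [.state, (.num_base_bdevs_discovered|tostring), (.num_base_bdevs_operational|tostring)] | join(" ")'
  # expected: online 2 2
)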
00:10:29.564   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:10:29.565   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:10:29.565    11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:10:29.565    11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:29.565    11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:29.565    11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:29.565    11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:29.565   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:10:29.565   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:10:29.565   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:10:29.565   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:29.565   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:29.565  [2024-12-16 11:31:55.475049] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:10:29.565   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:29.565   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:10:29.565   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:10:29.565    11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:10:29.565    11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:29.565    11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:29.565    11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:29.565    11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:29.565   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:10:29.565   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:10:29.565   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:10:29.565   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:29.565   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:29.565  [2024-12-16 11:31:55.526753] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:29.565  [2024-12-16 11:31:55.526862] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:29.565  [2024-12-16 11:31:55.538905] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:29.565  [2024-12-16 11:31:55.538957] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:29.565  [2024-12-16 11:31:55.538973] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline
00:10:29.565   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:29.565   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:10:29.565   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:10:29.565    11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:29.565    11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:10:29.565    11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:29.565    11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:29.565    11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:29.565   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:10:29.565   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:10:29.565   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:10:29.565   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:10:29.565   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:29.565   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:10:29.566   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:29.566   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:29.566  BaseBdev2
00:10:29.566   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:29.566   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:10:29.566   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:10:29.566   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:29.566   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:10:29.566   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:29.566   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:29.566   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:29.566   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:29.566   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:29.566   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:29.566   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:10:29.566   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:29.566   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:29.566  [
00:10:29.566  {
00:10:29.566  "name": "BaseBdev2",
00:10:29.566  "aliases": [
00:10:29.566  "addbe2a3-7265-4117-a9c6-f72ba93faf9c"
00:10:29.566  ],
00:10:29.826  "product_name": "Malloc disk",
00:10:29.826  "block_size": 512,
00:10:29.826  "num_blocks": 65536,
00:10:29.826  "uuid": "addbe2a3-7265-4117-a9c6-f72ba93faf9c",
00:10:29.826  "assigned_rate_limits": {
00:10:29.826  "rw_ios_per_sec": 0,
00:10:29.826  "rw_mbytes_per_sec": 0,
00:10:29.826  "r_mbytes_per_sec": 0,
00:10:29.826  "w_mbytes_per_sec": 0
00:10:29.826  },
00:10:29.826  "claimed": false,
00:10:29.826  "zoned": false,
00:10:29.826  "supported_io_types": {
00:10:29.826  "read": true,
00:10:29.826  "write": true,
00:10:29.826  "unmap": true,
00:10:29.826  "flush": true,
00:10:29.826  "reset": true,
00:10:29.826  "nvme_admin": false,
00:10:29.826  "nvme_io": false,
00:10:29.826  "nvme_io_md": false,
00:10:29.826  "write_zeroes": true,
00:10:29.826  "zcopy": true,
00:10:29.826  "get_zone_info": false,
00:10:29.826  "zone_management": false,
00:10:29.826  "zone_append": false,
00:10:29.826  "compare": false,
00:10:29.826  "compare_and_write": false,
00:10:29.826  "abort": true,
00:10:29.826  "seek_hole": false,
00:10:29.826  "seek_data": false,
00:10:29.826  "copy": true,
00:10:29.826  "nvme_iov_md": false
00:10:29.826  },
00:10:29.826  "memory_domains": [
00:10:29.826  {
00:10:29.826  "dma_device_id": "system",
00:10:29.826  "dma_device_type": 1
00:10:29.826  },
00:10:29.826  {
00:10:29.826  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:29.826  "dma_device_type": 2
00:10:29.826  }
00:10:29.826  ],
00:10:29.826  "driver_specific": {}
00:10:29.826  }
00:10:29.826  ]
00:10:29.826   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:29.826   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:10:29.826   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:29.826   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:29.826   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:10:29.826   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:29.826   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:29.826  BaseBdev3
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:29.827  [
00:10:29.827  {
00:10:29.827  "name": "BaseBdev3",
00:10:29.827  "aliases": [
00:10:29.827  "0db4c42c-382d-4a39-90a9-c346a3f9fe7a"
00:10:29.827  ],
00:10:29.827  "product_name": "Malloc disk",
00:10:29.827  "block_size": 512,
00:10:29.827  "num_blocks": 65536,
00:10:29.827  "uuid": "0db4c42c-382d-4a39-90a9-c346a3f9fe7a",
00:10:29.827  "assigned_rate_limits": {
00:10:29.827  "rw_ios_per_sec": 0,
00:10:29.827  "rw_mbytes_per_sec": 0,
00:10:29.827  "r_mbytes_per_sec": 0,
00:10:29.827  "w_mbytes_per_sec": 0
00:10:29.827  },
00:10:29.827  "claimed": false,
00:10:29.827  "zoned": false,
00:10:29.827  "supported_io_types": {
00:10:29.827  "read": true,
00:10:29.827  "write": true,
00:10:29.827  "unmap": true,
00:10:29.827  "flush": true,
00:10:29.827  "reset": true,
00:10:29.827  "nvme_admin": false,
00:10:29.827  "nvme_io": false,
00:10:29.827  "nvme_io_md": false,
00:10:29.827  "write_zeroes": true,
00:10:29.827  "zcopy": true,
00:10:29.827  "get_zone_info": false,
00:10:29.827  "zone_management": false,
00:10:29.827  "zone_append": false,
00:10:29.827  "compare": false,
00:10:29.827  "compare_and_write": false,
00:10:29.827  "abort": true,
00:10:29.827  "seek_hole": false,
00:10:29.827  "seek_data": false,
00:10:29.827  "copy": true,
00:10:29.827  "nvme_iov_md": false
00:10:29.827  },
00:10:29.827  "memory_domains": [
00:10:29.827  {
00:10:29.827  "dma_device_id": "system",
00:10:29.827  "dma_device_type": 1
00:10:29.827  },
00:10:29.827  {
00:10:29.827  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:29.827  "dma_device_type": 2
00:10:29.827  }
00:10:29.827  ],
00:10:29.827  "driver_specific": {}
00:10:29.827  }
00:10:29.827  ]
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:29.827  [2024-12-16 11:31:55.700570] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:29.827  [2024-12-16 11:31:55.700724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:29.827  [2024-12-16 11:31:55.700768] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:29.827  [2024-12-16 11:31:55.702779] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:29.827    11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:29.827    11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:29.827    11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:29.827    11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:29.827    11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:29.827    "name": "Existed_Raid",
00:10:29.827    "uuid": "00000000-0000-0000-0000-000000000000",
00:10:29.827    "strip_size_kb": 0,
00:10:29.827    "state": "configuring",
00:10:29.827    "raid_level": "raid1",
00:10:29.827    "superblock": false,
00:10:29.827    "num_base_bdevs": 3,
00:10:29.827    "num_base_bdevs_discovered": 2,
00:10:29.827    "num_base_bdevs_operational": 3,
00:10:29.827    "base_bdevs_list": [
00:10:29.827      {
00:10:29.827        "name": "BaseBdev1",
00:10:29.827        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:29.827        "is_configured": false,
00:10:29.827        "data_offset": 0,
00:10:29.827        "data_size": 0
00:10:29.827      },
00:10:29.827      {
00:10:29.827        "name": "BaseBdev2",
00:10:29.827        "uuid": "addbe2a3-7265-4117-a9c6-f72ba93faf9c",
00:10:29.827        "is_configured": true,
00:10:29.827        "data_offset": 0,
00:10:29.827        "data_size": 65536
00:10:29.827      },
00:10:29.827      {
00:10:29.827        "name": "BaseBdev3",
00:10:29.827        "uuid": "0db4c42c-382d-4a39-90a9-c346a3f9fe7a",
00:10:29.827        "is_configured": true,
00:10:29.827        "data_offset": 0,
00:10:29.827        "data_size": 65536
00:10:29.827      }
00:10:29.827    ]
00:10:29.827  }'
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:29.827   11:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
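(The create call above declares a three-member raid1 while BaseBdev1 does not exist yet, so the array is registered but held in "configuring" until the missing base bdev appears. A short sketch of that step with the arguments from the trace; the rpc.py invocation is illustrative:

  ./scripts/rpc.py bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  ./scripts/rpc.py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'
  # prints "configuring" while BaseBdev1 is still missing
)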
00:10:30.396   11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:10:30.396   11:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:30.396   11:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:30.396  [2024-12-16 11:31:56.159775] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:10:30.396   11:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:30.396   11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:30.396   11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:30.396   11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:30.396   11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:30.396   11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:30.396   11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:30.396   11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:30.396   11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:30.396   11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:30.396   11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:30.396    11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:30.396    11:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:30.396    11:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:30.396    11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:30.396    11:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:30.396   11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:30.396    "name": "Existed_Raid",
00:10:30.396    "uuid": "00000000-0000-0000-0000-000000000000",
00:10:30.396    "strip_size_kb": 0,
00:10:30.396    "state": "configuring",
00:10:30.396    "raid_level": "raid1",
00:10:30.396    "superblock": false,
00:10:30.396    "num_base_bdevs": 3,
00:10:30.396    "num_base_bdevs_discovered": 1,
00:10:30.396    "num_base_bdevs_operational": 3,
00:10:30.396    "base_bdevs_list": [
00:10:30.396      {
00:10:30.396        "name": "BaseBdev1",
00:10:30.396        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:30.396        "is_configured": false,
00:10:30.396        "data_offset": 0,
00:10:30.396        "data_size": 0
00:10:30.396      },
00:10:30.396      {
00:10:30.396        "name": null,
00:10:30.396        "uuid": "addbe2a3-7265-4117-a9c6-f72ba93faf9c",
00:10:30.396        "is_configured": false,
00:10:30.396        "data_offset": 0,
00:10:30.396        "data_size": 65536
00:10:30.396      },
00:10:30.396      {
00:10:30.396        "name": "BaseBdev3",
00:10:30.396        "uuid": "0db4c42c-382d-4a39-90a9-c346a3f9fe7a",
00:10:30.396        "is_configured": true,
00:10:30.396        "data_offset": 0,
00:10:30.396        "data_size": 65536
00:10:30.396      }
00:10:30.396    ]
00:10:30.397  }'
00:10:30.397   11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:30.397   11:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:30.656    11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:30.656    11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:30.656    11:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:30.656    11:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:30.656    11:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:30.656   11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:10:30.656   11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:30.656   11:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:30.656   11:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:30.656  [2024-12-16 11:31:56.665981] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:30.656  BaseBdev1
00:10:30.656   11:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:30.656   11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:10:30.656   11:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:10:30.656   11:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:30.656   11:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:10:30.656   11:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:30.656   11:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:30.656   11:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:30.656   11:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:30.656   11:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:30.656   11:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:30.656   11:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:30.656   11:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:30.656   11:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:30.656  [
00:10:30.656  {
00:10:30.656  "name": "BaseBdev1",
00:10:30.656  "aliases": [
00:10:30.656  "e003971d-8937-4fce-848b-c5e5f1e430fd"
00:10:30.656  ],
00:10:30.656  "product_name": "Malloc disk",
00:10:30.656  "block_size": 512,
00:10:30.656  "num_blocks": 65536,
00:10:30.656  "uuid": "e003971d-8937-4fce-848b-c5e5f1e430fd",
00:10:30.656  "assigned_rate_limits": {
00:10:30.656  "rw_ios_per_sec": 0,
00:10:30.656  "rw_mbytes_per_sec": 0,
00:10:30.656  "r_mbytes_per_sec": 0,
00:10:30.656  "w_mbytes_per_sec": 0
00:10:30.656  },
00:10:30.656  "claimed": true,
00:10:30.656  "claim_type": "exclusive_write",
00:10:30.656  "zoned": false,
00:10:30.656  "supported_io_types": {
00:10:30.656  "read": true,
00:10:30.656  "write": true,
00:10:30.656  "unmap": true,
00:10:30.656  "flush": true,
00:10:30.656  "reset": true,
00:10:30.656  "nvme_admin": false,
00:10:30.656  "nvme_io": false,
00:10:30.656  "nvme_io_md": false,
00:10:30.656  "write_zeroes": true,
00:10:30.656  "zcopy": true,
00:10:30.656  "get_zone_info": false,
00:10:30.656  "zone_management": false,
00:10:30.656  "zone_append": false,
00:10:30.656  "compare": false,
00:10:30.656  "compare_and_write": false,
00:10:30.656  "abort": true,
00:10:30.656  "seek_hole": false,
00:10:30.656  "seek_data": false,
00:10:30.656  "copy": true,
00:10:30.656  "nvme_iov_md": false
00:10:30.656  },
00:10:30.656  "memory_domains": [
00:10:30.656  {
00:10:30.656  "dma_device_id": "system",
00:10:30.656  "dma_device_type": 1
00:10:30.656  },
00:10:30.656  {
00:10:30.656  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:30.656  "dma_device_type": 2
00:10:30.656  }
00:10:30.656  ],
00:10:30.656  "driver_specific": {}
00:10:30.656  }
00:10:30.656  ]
00:10:30.656   11:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:30.656   11:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:10:30.656   11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:30.656   11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:30.656   11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:30.656   11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:30.656   11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:30.656   11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:30.656   11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:30.656   11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:30.656   11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:30.656   11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:30.656    11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:30.656    11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:30.656    11:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:30.656    11:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:30.916    11:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:30.916   11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:30.916    "name": "Existed_Raid",
00:10:30.916    "uuid": "00000000-0000-0000-0000-000000000000",
00:10:30.916    "strip_size_kb": 0,
00:10:30.916    "state": "configuring",
00:10:30.916    "raid_level": "raid1",
00:10:30.916    "superblock": false,
00:10:30.916    "num_base_bdevs": 3,
00:10:30.916    "num_base_bdevs_discovered": 2,
00:10:30.916    "num_base_bdevs_operational": 3,
00:10:30.916    "base_bdevs_list": [
00:10:30.916      {
00:10:30.916        "name": "BaseBdev1",
00:10:30.916        "uuid": "e003971d-8937-4fce-848b-c5e5f1e430fd",
00:10:30.916        "is_configured": true,
00:10:30.916        "data_offset": 0,
00:10:30.916        "data_size": 65536
00:10:30.916      },
00:10:30.916      {
00:10:30.916        "name": null,
00:10:30.916        "uuid": "addbe2a3-7265-4117-a9c6-f72ba93faf9c",
00:10:30.916        "is_configured": false,
00:10:30.916        "data_offset": 0,
00:10:30.916        "data_size": 65536
00:10:30.916      },
00:10:30.916      {
00:10:30.916        "name": "BaseBdev3",
00:10:30.916        "uuid": "0db4c42c-382d-4a39-90a9-c346a3f9fe7a",
00:10:30.916        "is_configured": true,
00:10:30.916        "data_offset": 0,
00:10:30.916        "data_size": 65536
00:10:30.916      }
00:10:30.916    ]
00:10:30.916  }'
00:10:30.916   11:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:30.916   11:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:31.175    11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:31.175    11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:31.175    11:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:31.175    11:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:31.175    11:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:31.175   11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:10:31.175   11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:10:31.175   11:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:31.175   11:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:31.175  [2024-12-16 11:31:57.201186] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:31.175   11:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:31.175   11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:31.175   11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:31.175   11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:31.175   11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:31.175   11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:31.175   11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:31.175   11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:31.175   11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:31.175   11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:31.176   11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:31.176    11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:31.176    11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:31.176    11:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:31.176    11:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:31.176    11:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:31.435   11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:31.435    "name": "Existed_Raid",
00:10:31.435    "uuid": "00000000-0000-0000-0000-000000000000",
00:10:31.435    "strip_size_kb": 0,
00:10:31.435    "state": "configuring",
00:10:31.435    "raid_level": "raid1",
00:10:31.435    "superblock": false,
00:10:31.435    "num_base_bdevs": 3,
00:10:31.435    "num_base_bdevs_discovered": 1,
00:10:31.435    "num_base_bdevs_operational": 3,
00:10:31.435    "base_bdevs_list": [
00:10:31.435      {
00:10:31.435        "name": "BaseBdev1",
00:10:31.435        "uuid": "e003971d-8937-4fce-848b-c5e5f1e430fd",
00:10:31.435        "is_configured": true,
00:10:31.435        "data_offset": 0,
00:10:31.435        "data_size": 65536
00:10:31.435      },
00:10:31.435      {
00:10:31.435        "name": null,
00:10:31.435        "uuid": "addbe2a3-7265-4117-a9c6-f72ba93faf9c",
00:10:31.435        "is_configured": false,
00:10:31.435        "data_offset": 0,
00:10:31.435        "data_size": 65536
00:10:31.435      },
00:10:31.435      {
00:10:31.435        "name": null,
00:10:31.435        "uuid": "0db4c42c-382d-4a39-90a9-c346a3f9fe7a",
00:10:31.435        "is_configured": false,
00:10:31.435        "data_offset": 0,
00:10:31.435        "data_size": 65536
00:10:31.435      }
00:10:31.435    ]
00:10:31.435  }'
00:10:31.435   11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:31.435   11:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:31.695    11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:31.695    11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:31.695    11:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:31.695    11:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:31.695    11:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:31.695   11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:10:31.695   11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:10:31.695   11:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:31.695   11:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:31.695  [2024-12-16 11:31:57.680457] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:31.695   11:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:31.695   11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:31.695   11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:31.695   11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:31.695   11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:31.695   11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:31.695   11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:31.695   11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:31.696   11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:31.696   11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:31.696   11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:31.696    11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:31.696    11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:31.696    11:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:31.696    11:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:31.696    11:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:31.696   11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:31.696    "name": "Existed_Raid",
00:10:31.696    "uuid": "00000000-0000-0000-0000-000000000000",
00:10:31.696    "strip_size_kb": 0,
00:10:31.696    "state": "configuring",
00:10:31.696    "raid_level": "raid1",
00:10:31.696    "superblock": false,
00:10:31.696    "num_base_bdevs": 3,
00:10:31.696    "num_base_bdevs_discovered": 2,
00:10:31.696    "num_base_bdevs_operational": 3,
00:10:31.696    "base_bdevs_list": [
00:10:31.696      {
00:10:31.696        "name": "BaseBdev1",
00:10:31.696        "uuid": "e003971d-8937-4fce-848b-c5e5f1e430fd",
00:10:31.696        "is_configured": true,
00:10:31.696        "data_offset": 0,
00:10:31.696        "data_size": 65536
00:10:31.696      },
00:10:31.696      {
00:10:31.696        "name": null,
00:10:31.696        "uuid": "addbe2a3-7265-4117-a9c6-f72ba93faf9c",
00:10:31.696        "is_configured": false,
00:10:31.696        "data_offset": 0,
00:10:31.696        "data_size": 65536
00:10:31.696      },
00:10:31.696      {
00:10:31.696        "name": "BaseBdev3",
00:10:31.696        "uuid": "0db4c42c-382d-4a39-90a9-c346a3f9fe7a",
00:10:31.696        "is_configured": true,
00:10:31.696        "data_offset": 0,
00:10:31.696        "data_size": 65536
00:10:31.696      }
00:10:31.696    ]
00:10:31.696  }'
00:10:31.696   11:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:31.696   11:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.265    11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:32.265    11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:32.265    11:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:32.265    11:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.265    11:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:32.265   11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:10:32.265   11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:10:32.265   11:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:32.265   11:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.265  [2024-12-16 11:31:58.191560] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:10:32.265   11:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:32.265   11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:32.265   11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:32.265   11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:32.265   11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:32.265   11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:32.265   11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:32.265   11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:32.265   11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:32.265   11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:32.265   11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:32.265    11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:32.265    11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:32.265    11:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:32.265    11:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.265    11:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:32.265   11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:32.265    "name": "Existed_Raid",
00:10:32.265    "uuid": "00000000-0000-0000-0000-000000000000",
00:10:32.265    "strip_size_kb": 0,
00:10:32.265    "state": "configuring",
00:10:32.265    "raid_level": "raid1",
00:10:32.265    "superblock": false,
00:10:32.265    "num_base_bdevs": 3,
00:10:32.265    "num_base_bdevs_discovered": 1,
00:10:32.265    "num_base_bdevs_operational": 3,
00:10:32.265    "base_bdevs_list": [
00:10:32.265      {
00:10:32.265        "name": null,
00:10:32.265        "uuid": "e003971d-8937-4fce-848b-c5e5f1e430fd",
00:10:32.265        "is_configured": false,
00:10:32.265        "data_offset": 0,
00:10:32.265        "data_size": 65536
00:10:32.265      },
00:10:32.265      {
00:10:32.265        "name": null,
00:10:32.265        "uuid": "addbe2a3-7265-4117-a9c6-f72ba93faf9c",
00:10:32.265        "is_configured": false,
00:10:32.265        "data_offset": 0,
00:10:32.265        "data_size": 65536
00:10:32.265      },
00:10:32.265      {
00:10:32.265        "name": "BaseBdev3",
00:10:32.265        "uuid": "0db4c42c-382d-4a39-90a9-c346a3f9fe7a",
00:10:32.265        "is_configured": true,
00:10:32.265        "data_offset": 0,
00:10:32.265        "data_size": 65536
00:10:32.265      }
00:10:32.265    ]
00:10:32.265  }'
00:10:32.265   11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:32.265   11:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.833    11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:32.833    11:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:32.833    11:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.833    11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:32.833    11:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:32.833   11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:10:32.833   11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:10:32.833   11:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:32.833   11:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.833  [2024-12-16 11:31:58.701249] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:32.833   11:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:32.833   11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:32.833   11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:32.833   11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:32.833   11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:32.833   11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:32.833   11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:32.833   11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:32.833   11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:32.833   11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:32.834   11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:32.834    11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:32.834    11:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:32.834    11:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.834    11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:32.834    11:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:32.834   11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:32.834    "name": "Existed_Raid",
00:10:32.834    "uuid": "00000000-0000-0000-0000-000000000000",
00:10:32.834    "strip_size_kb": 0,
00:10:32.834    "state": "configuring",
00:10:32.834    "raid_level": "raid1",
00:10:32.834    "superblock": false,
00:10:32.834    "num_base_bdevs": 3,
00:10:32.834    "num_base_bdevs_discovered": 2,
00:10:32.834    "num_base_bdevs_operational": 3,
00:10:32.834    "base_bdevs_list": [
00:10:32.834      {
00:10:32.834        "name": null,
00:10:32.834        "uuid": "e003971d-8937-4fce-848b-c5e5f1e430fd",
00:10:32.834        "is_configured": false,
00:10:32.834        "data_offset": 0,
00:10:32.834        "data_size": 65536
00:10:32.834      },
00:10:32.834      {
00:10:32.834        "name": "BaseBdev2",
00:10:32.834        "uuid": "addbe2a3-7265-4117-a9c6-f72ba93faf9c",
00:10:32.834        "is_configured": true,
00:10:32.834        "data_offset": 0,
00:10:32.834        "data_size": 65536
00:10:32.834      },
00:10:32.834      {
00:10:32.834        "name": "BaseBdev3",
00:10:32.834        "uuid": "0db4c42c-382d-4a39-90a9-c346a3f9fe7a",
00:10:32.834        "is_configured": true,
00:10:32.834        "data_offset": 0,
00:10:32.834        "data_size": 65536
00:10:32.834      }
00:10:32.834    ]
00:10:32.834  }'
00:10:32.834   11:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:32.834   11:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:33.403    11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:33.403    11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:33.403    11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:33.403    11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:33.403    11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:33.403   11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:10:33.403    11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:33.403    11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:33.403    11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:10:33.403    11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:33.403    11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:33.403   11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e003971d-8937-4fce-848b-c5e5f1e430fd
00:10:33.403   11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:33.403   11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:33.403  [2024-12-16 11:31:59.263160] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:10:33.403  [2024-12-16 11:31:59.263205] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:10:33.403  [2024-12-16 11:31:59.263213] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:10:33.403  [2024-12-16 11:31:59.263485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:10:33.403  [2024-12-16 11:31:59.263636] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:10:33.403  [2024-12-16 11:31:59.263650] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00
00:10:33.403  [2024-12-16 11:31:59.263828] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:33.403  NewBaseBdev
00:10:33.403   11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:33.403   11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:10:33.403   11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev
00:10:33.403   11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:33.403   11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:10:33.403   11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:33.403   11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:33.403   11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:33.403   11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:33.403   11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:33.403   11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:33.403   11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:10:33.403   11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:33.403   11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:33.403  [
00:10:33.403  {
00:10:33.403  "name": "NewBaseBdev",
00:10:33.403  "aliases": [
00:10:33.403  "e003971d-8937-4fce-848b-c5e5f1e430fd"
00:10:33.403  ],
00:10:33.403  "product_name": "Malloc disk",
00:10:33.403  "block_size": 512,
00:10:33.403  "num_blocks": 65536,
00:10:33.403  "uuid": "e003971d-8937-4fce-848b-c5e5f1e430fd",
00:10:33.403  "assigned_rate_limits": {
00:10:33.403  "rw_ios_per_sec": 0,
00:10:33.403  "rw_mbytes_per_sec": 0,
00:10:33.403  "r_mbytes_per_sec": 0,
00:10:33.403  "w_mbytes_per_sec": 0
00:10:33.403  },
00:10:33.403  "claimed": true,
00:10:33.403  "claim_type": "exclusive_write",
00:10:33.403  "zoned": false,
00:10:33.403  "supported_io_types": {
00:10:33.403  "read": true,
00:10:33.403  "write": true,
00:10:33.403  "unmap": true,
00:10:33.403  "flush": true,
00:10:33.403  "reset": true,
00:10:33.403  "nvme_admin": false,
00:10:33.403  "nvme_io": false,
00:10:33.403  "nvme_io_md": false,
00:10:33.403  "write_zeroes": true,
00:10:33.403  "zcopy": true,
00:10:33.403  "get_zone_info": false,
00:10:33.403  "zone_management": false,
00:10:33.403  "zone_append": false,
00:10:33.403  "compare": false,
00:10:33.403  "compare_and_write": false,
00:10:33.403  "abort": true,
00:10:33.403  "seek_hole": false,
00:10:33.403  "seek_data": false,
00:10:33.403  "copy": true,
00:10:33.403  "nvme_iov_md": false
00:10:33.403  },
00:10:33.403  "memory_domains": [
00:10:33.403  {
00:10:33.403  "dma_device_id": "system",
00:10:33.403  "dma_device_type": 1
00:10:33.403  },
00:10:33.403  {
00:10:33.403  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:33.403  "dma_device_type": 2
00:10:33.403  }
00:10:33.403  ],
00:10:33.403  "driver_specific": {}
00:10:33.403  }
00:10:33.403  ]
00:10:33.403   11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:33.403   11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:10:33.403   11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:10:33.403   11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:33.403   11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:33.403   11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:33.403   11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:33.403   11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:33.403   11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:33.403   11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:33.403   11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:33.403   11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:33.403    11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:33.403    11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:33.403    11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:33.403    11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:33.403    11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:33.403   11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:33.403    "name": "Existed_Raid",
00:10:33.404    "uuid": "0173cc29-8e82-4ff7-a00c-22cc5d415f10",
00:10:33.404    "strip_size_kb": 0,
00:10:33.404    "state": "online",
00:10:33.404    "raid_level": "raid1",
00:10:33.404    "superblock": false,
00:10:33.404    "num_base_bdevs": 3,
00:10:33.404    "num_base_bdevs_discovered": 3,
00:10:33.404    "num_base_bdevs_operational": 3,
00:10:33.404    "base_bdevs_list": [
00:10:33.404      {
00:10:33.404        "name": "NewBaseBdev",
00:10:33.404        "uuid": "e003971d-8937-4fce-848b-c5e5f1e430fd",
00:10:33.404        "is_configured": true,
00:10:33.404        "data_offset": 0,
00:10:33.404        "data_size": 65536
00:10:33.404      },
00:10:33.404      {
00:10:33.404        "name": "BaseBdev2",
00:10:33.404        "uuid": "addbe2a3-7265-4117-a9c6-f72ba93faf9c",
00:10:33.404        "is_configured": true,
00:10:33.404        "data_offset": 0,
00:10:33.404        "data_size": 65536
00:10:33.404      },
00:10:33.404      {
00:10:33.404        "name": "BaseBdev3",
00:10:33.404        "uuid": "0db4c42c-382d-4a39-90a9-c346a3f9fe7a",
00:10:33.404        "is_configured": true,
00:10:33.404        "data_offset": 0,
00:10:33.404        "data_size": 65536
00:10:33.404      }
00:10:33.404    ]
00:10:33.404  }'
00:10:33.404   11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:33.404   11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:33.973   11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:10:33.973   11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:10:33.973   11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:33.973   11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:33.973   11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:10:33.973   11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:33.973    11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:10:33.973    11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:33.973    11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:33.973    11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:33.973  [2024-12-16 11:31:59.766730] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:33.973    11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:33.973   11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:33.973    "name": "Existed_Raid",
00:10:33.973    "aliases": [
00:10:33.973      "0173cc29-8e82-4ff7-a00c-22cc5d415f10"
00:10:33.973    ],
00:10:33.973    "product_name": "Raid Volume",
00:10:33.973    "block_size": 512,
00:10:33.973    "num_blocks": 65536,
00:10:33.973    "uuid": "0173cc29-8e82-4ff7-a00c-22cc5d415f10",
00:10:33.973    "assigned_rate_limits": {
00:10:33.973      "rw_ios_per_sec": 0,
00:10:33.973      "rw_mbytes_per_sec": 0,
00:10:33.973      "r_mbytes_per_sec": 0,
00:10:33.973      "w_mbytes_per_sec": 0
00:10:33.973    },
00:10:33.973    "claimed": false,
00:10:33.973    "zoned": false,
00:10:33.973    "supported_io_types": {
00:10:33.973      "read": true,
00:10:33.973      "write": true,
00:10:33.973      "unmap": false,
00:10:33.973      "flush": false,
00:10:33.973      "reset": true,
00:10:33.973      "nvme_admin": false,
00:10:33.973      "nvme_io": false,
00:10:33.973      "nvme_io_md": false,
00:10:33.973      "write_zeroes": true,
00:10:33.973      "zcopy": false,
00:10:33.973      "get_zone_info": false,
00:10:33.973      "zone_management": false,
00:10:33.973      "zone_append": false,
00:10:33.973      "compare": false,
00:10:33.973      "compare_and_write": false,
00:10:33.973      "abort": false,
00:10:33.973      "seek_hole": false,
00:10:33.973      "seek_data": false,
00:10:33.973      "copy": false,
00:10:33.973      "nvme_iov_md": false
00:10:33.973    },
00:10:33.973    "memory_domains": [
00:10:33.973      {
00:10:33.973        "dma_device_id": "system",
00:10:33.973        "dma_device_type": 1
00:10:33.973      },
00:10:33.973      {
00:10:33.973        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:33.973        "dma_device_type": 2
00:10:33.973      },
00:10:33.973      {
00:10:33.973        "dma_device_id": "system",
00:10:33.973        "dma_device_type": 1
00:10:33.973      },
00:10:33.973      {
00:10:33.973        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:33.973        "dma_device_type": 2
00:10:33.973      },
00:10:33.973      {
00:10:33.973        "dma_device_id": "system",
00:10:33.973        "dma_device_type": 1
00:10:33.973      },
00:10:33.973      {
00:10:33.973        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:33.973        "dma_device_type": 2
00:10:33.973      }
00:10:33.973    ],
00:10:33.973    "driver_specific": {
00:10:33.973      "raid": {
00:10:33.973        "uuid": "0173cc29-8e82-4ff7-a00c-22cc5d415f10",
00:10:33.973        "strip_size_kb": 0,
00:10:33.973        "state": "online",
00:10:33.973        "raid_level": "raid1",
00:10:33.973        "superblock": false,
00:10:33.973        "num_base_bdevs": 3,
00:10:33.973        "num_base_bdevs_discovered": 3,
00:10:33.973        "num_base_bdevs_operational": 3,
00:10:33.973        "base_bdevs_list": [
00:10:33.973          {
00:10:33.973            "name": "NewBaseBdev",
00:10:33.973            "uuid": "e003971d-8937-4fce-848b-c5e5f1e430fd",
00:10:33.973            "is_configured": true,
00:10:33.973            "data_offset": 0,
00:10:33.973            "data_size": 65536
00:10:33.973          },
00:10:33.973          {
00:10:33.973            "name": "BaseBdev2",
00:10:33.973            "uuid": "addbe2a3-7265-4117-a9c6-f72ba93faf9c",
00:10:33.973            "is_configured": true,
00:10:33.973            "data_offset": 0,
00:10:33.973            "data_size": 65536
00:10:33.973          },
00:10:33.973          {
00:10:33.973            "name": "BaseBdev3",
00:10:33.973            "uuid": "0db4c42c-382d-4a39-90a9-c346a3f9fe7a",
00:10:33.973            "is_configured": true,
00:10:33.973            "data_offset": 0,
00:10:33.973            "data_size": 65536
00:10:33.973          }
00:10:33.973        ]
00:10:33.973      }
00:10:33.973    }
00:10:33.973  }'
00:10:33.973    11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:33.973   11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:10:33.973  BaseBdev2
00:10:33.973  BaseBdev3'
00:10:33.973    11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:33.973   11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:10:33.973   11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:33.973    11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:10:33.973    11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:33.973    11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:33.973    11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:33.973    11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:33.974   11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:33.974   11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:10:33.974   11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:33.974    11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:10:33.974    11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:33.974    11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:33.974    11:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:33.974    11:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:33.974   11:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:33.974   11:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:10:33.974   11:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:33.974    11:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:10:33.974    11:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:33.974    11:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:33.974    11:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:33.974    11:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:34.233   11:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:34.233   11:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:10:34.233   11:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:34.233   11:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:34.233   11:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:34.233  [2024-12-16 11:32:00.053943] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:34.233  [2024-12-16 11:32:00.053975] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:34.233  [2024-12-16 11:32:00.054059] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:34.233  [2024-12-16 11:32:00.054307] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:34.234  [2024-12-16 11:32:00.054318] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline
00:10:34.234   11:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:34.234   11:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 78754
00:10:34.234   11:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 78754 ']'
00:10:34.234   11:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 78754
00:10:34.234    11:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname
00:10:34.234   11:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:34.234    11:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78754
00:10:34.234  killing process with pid 78754
00:10:34.234   11:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:34.234   11:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:10:34.234   11:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78754'
00:10:34.234   11:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 78754
00:10:34.234  [2024-12-16 11:32:00.102782] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:34.234   11:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 78754
00:10:34.234  [2024-12-16 11:32:00.134159] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:34.493   11:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:10:34.493  
00:10:34.493  real	0m9.012s
00:10:34.493  user	0m15.362s
00:10:34.493  sys	0m1.867s
00:10:34.493   11:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:34.493  ************************************
00:10:34.493  END TEST raid_state_function_test
00:10:34.493  ************************************
00:10:34.493   11:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:34.493   11:32:00 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true
00:10:34.493   11:32:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:10:34.493   11:32:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:34.493   11:32:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:34.493  ************************************
00:10:34.493  START TEST raid_state_function_test_sb
00:10:34.493  ************************************
00:10:34.493   11:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true
00:10:34.493   11:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:10:34.493   11:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:10:34.493   11:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:10:34.493   11:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:10:34.493    11:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:10:34.493    11:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:34.493    11:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:10:34.493    11:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:34.493    11:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:34.493    11:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:10:34.493    11:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:34.493    11:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:34.493    11:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:10:34.493    11:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:34.493    11:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:34.493   11:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:10:34.493   11:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:10:34.493   11:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:10:34.493   11:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:10:34.493   11:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:10:34.493   11:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:10:34.493   11:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:10:34.493   11:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:10:34.493   11:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:10:34.493   11:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:10:34.493   11:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=79364
00:10:34.493   11:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:10:34.493   11:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79364'
00:10:34.493  Process raid pid: 79364
00:10:34.493   11:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 79364
00:10:34.493   11:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 79364 ']'
00:10:34.493   11:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:34.493  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:34.493   11:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:34.493   11:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:34.493   11:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:10:34.493   11:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:34.753  [2024-12-16 11:32:00.564338] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:10:34.753  [2024-12-16 11:32:00.564533] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:34.753  [2024-12-16 11:32:00.734384] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:34.753  [2024-12-16 11:32:00.784266] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:10:35.012  [2024-12-16 11:32:00.826407] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:35.012  [2024-12-16 11:32:00.826447] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:35.579   11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:10:35.579   11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0
00:10:35.579   11:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:35.579   11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:35.579   11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:35.579  [2024-12-16 11:32:01.423566] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:35.579  [2024-12-16 11:32:01.423616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:35.579  [2024-12-16 11:32:01.423629] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:35.579  [2024-12-16 11:32:01.423639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:35.579  [2024-12-16 11:32:01.423645] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:35.579  [2024-12-16 11:32:01.423657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:35.579   11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:35.579   11:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:35.579   11:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:35.579   11:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:35.579   11:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:35.579   11:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:35.579   11:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:35.579   11:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:35.579   11:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:35.579   11:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:35.579   11:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:35.579    11:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:35.579    11:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:35.579    11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:35.579    11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:35.579    11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:35.579   11:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:35.579    "name": "Existed_Raid",
00:10:35.579    "uuid": "96eee5cf-49f7-4de4-8947-ac5b669c5bc1",
00:10:35.579    "strip_size_kb": 0,
00:10:35.579    "state": "configuring",
00:10:35.579    "raid_level": "raid1",
00:10:35.579    "superblock": true,
00:10:35.579    "num_base_bdevs": 3,
00:10:35.579    "num_base_bdevs_discovered": 0,
00:10:35.579    "num_base_bdevs_operational": 3,
00:10:35.579    "base_bdevs_list": [
00:10:35.579      {
00:10:35.579        "name": "BaseBdev1",
00:10:35.579        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:35.579        "is_configured": false,
00:10:35.579        "data_offset": 0,
00:10:35.579        "data_size": 0
00:10:35.579      },
00:10:35.579      {
00:10:35.579        "name": "BaseBdev2",
00:10:35.579        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:35.579        "is_configured": false,
00:10:35.579        "data_offset": 0,
00:10:35.579        "data_size": 0
00:10:35.579      },
00:10:35.579      {
00:10:35.579        "name": "BaseBdev3",
00:10:35.579        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:35.579        "is_configured": false,
00:10:35.579        "data_offset": 0,
00:10:35.579        "data_size": 0
00:10:35.579      }
00:10:35.579    ]
00:10:35.579  }'
00:10:35.579   11:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:35.579   11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:35.838   11:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:35.838   11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:35.838   11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:35.838  [2024-12-16 11:32:01.890766] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:35.838  [2024-12-16 11:32:01.890815] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:10:35.838   11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:35.838   11:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:35.838   11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:35.838   11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:35.838  [2024-12-16 11:32:01.898789] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:35.838  [2024-12-16 11:32:01.898828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:35.838  [2024-12-16 11:32:01.898837] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:35.838  [2024-12-16 11:32:01.898846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:35.838  [2024-12-16 11:32:01.898852] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:35.838  [2024-12-16 11:32:01.898860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:36.096   11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:36.096   11:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:36.096   11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:36.096   11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:36.096  [2024-12-16 11:32:01.915760] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:36.096  BaseBdev1
00:10:36.096   11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:36.096   11:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:10:36.096   11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:10:36.096   11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:36.096   11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:10:36.096   11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:36.096   11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:36.096   11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:36.096   11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:36.096   11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:36.096   11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:36.096   11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:36.096   11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:36.096   11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:36.096  [
00:10:36.096  {
00:10:36.096  "name": "BaseBdev1",
00:10:36.096  "aliases": [
00:10:36.096  "c909be9d-46bd-4246-9fe5-c2ebc14d6bb2"
00:10:36.096  ],
00:10:36.096  "product_name": "Malloc disk",
00:10:36.096  "block_size": 512,
00:10:36.096  "num_blocks": 65536,
00:10:36.097  "uuid": "c909be9d-46bd-4246-9fe5-c2ebc14d6bb2",
00:10:36.097  "assigned_rate_limits": {
00:10:36.097  "rw_ios_per_sec": 0,
00:10:36.097  "rw_mbytes_per_sec": 0,
00:10:36.097  "r_mbytes_per_sec": 0,
00:10:36.097  "w_mbytes_per_sec": 0
00:10:36.097  },
00:10:36.097  "claimed": true,
00:10:36.097  "claim_type": "exclusive_write",
00:10:36.097  "zoned": false,
00:10:36.097  "supported_io_types": {
00:10:36.097  "read": true,
00:10:36.097  "write": true,
00:10:36.097  "unmap": true,
00:10:36.097  "flush": true,
00:10:36.097  "reset": true,
00:10:36.097  "nvme_admin": false,
00:10:36.097  "nvme_io": false,
00:10:36.097  "nvme_io_md": false,
00:10:36.097  "write_zeroes": true,
00:10:36.097  "zcopy": true,
00:10:36.097  "get_zone_info": false,
00:10:36.097  "zone_management": false,
00:10:36.097  "zone_append": false,
00:10:36.097  "compare": false,
00:10:36.097  "compare_and_write": false,
00:10:36.097  "abort": true,
00:10:36.097  "seek_hole": false,
00:10:36.097  "seek_data": false,
00:10:36.097  "copy": true,
00:10:36.097  "nvme_iov_md": false
00:10:36.097  },
00:10:36.097  "memory_domains": [
00:10:36.097  {
00:10:36.097  "dma_device_id": "system",
00:10:36.097  "dma_device_type": 1
00:10:36.097  },
00:10:36.097  {
00:10:36.097  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:36.097  "dma_device_type": 2
00:10:36.097  }
00:10:36.097  ],
00:10:36.097  "driver_specific": {}
00:10:36.097  }
00:10:36.097  ]
00:10:36.097   11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:36.097   11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:10:36.097   11:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:36.097   11:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:36.097   11:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:36.097   11:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:36.097   11:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:36.097   11:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:36.097   11:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:36.097   11:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:36.097   11:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:36.097   11:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:36.097    11:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:36.097    11:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:36.097    11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:36.097    11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:36.097    11:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:36.097   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:36.097    "name": "Existed_Raid",
00:10:36.097    "uuid": "7bfb9e63-3c89-45dd-8852-b2147f967e7f",
00:10:36.097    "strip_size_kb": 0,
00:10:36.097    "state": "configuring",
00:10:36.097    "raid_level": "raid1",
00:10:36.097    "superblock": true,
00:10:36.097    "num_base_bdevs": 3,
00:10:36.097    "num_base_bdevs_discovered": 1,
00:10:36.097    "num_base_bdevs_operational": 3,
00:10:36.097    "base_bdevs_list": [
00:10:36.097      {
00:10:36.097        "name": "BaseBdev1",
00:10:36.097        "uuid": "c909be9d-46bd-4246-9fe5-c2ebc14d6bb2",
00:10:36.097        "is_configured": true,
00:10:36.097        "data_offset": 2048,
00:10:36.097        "data_size": 63488
00:10:36.097      },
00:10:36.097      {
00:10:36.097        "name": "BaseBdev2",
00:10:36.097        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:36.097        "is_configured": false,
00:10:36.097        "data_offset": 0,
00:10:36.097        "data_size": 0
00:10:36.097      },
00:10:36.097      {
00:10:36.097        "name": "BaseBdev3",
00:10:36.097        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:36.097        "is_configured": false,
00:10:36.097        "data_offset": 0,
00:10:36.097        "data_size": 0
00:10:36.097      }
00:10:36.097    ]
00:10:36.097  }'
00:10:36.097   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:36.097   11:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:36.356   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:36.356   11:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:36.356   11:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:36.356  [2024-12-16 11:32:02.375301] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:36.356  [2024-12-16 11:32:02.375359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:10:36.356   11:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:36.356   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:36.356   11:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:36.356   11:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:36.356  [2024-12-16 11:32:02.387294] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:36.356  [2024-12-16 11:32:02.389114] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:36.356  [2024-12-16 11:32:02.389156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:36.356  [2024-12-16 11:32:02.389166] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:36.356  [2024-12-16 11:32:02.389176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:36.356   11:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:36.356   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:10:36.356   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:36.356   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:36.357   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:36.357   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:36.357   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:36.357   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:36.357   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:36.357   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:36.357   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:36.357   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:36.357   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:36.357    11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:36.357    11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:36.357    11:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:36.357    11:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:36.357    11:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:36.616   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:36.616    "name": "Existed_Raid",
00:10:36.616    "uuid": "24f170c5-e06d-4f23-a432-27754900fb43",
00:10:36.616    "strip_size_kb": 0,
00:10:36.616    "state": "configuring",
00:10:36.616    "raid_level": "raid1",
00:10:36.616    "superblock": true,
00:10:36.616    "num_base_bdevs": 3,
00:10:36.616    "num_base_bdevs_discovered": 1,
00:10:36.616    "num_base_bdevs_operational": 3,
00:10:36.616    "base_bdevs_list": [
00:10:36.616      {
00:10:36.616        "name": "BaseBdev1",
00:10:36.616        "uuid": "c909be9d-46bd-4246-9fe5-c2ebc14d6bb2",
00:10:36.616        "is_configured": true,
00:10:36.616        "data_offset": 2048,
00:10:36.616        "data_size": 63488
00:10:36.616      },
00:10:36.616      {
00:10:36.616        "name": "BaseBdev2",
00:10:36.616        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:36.616        "is_configured": false,
00:10:36.616        "data_offset": 0,
00:10:36.616        "data_size": 0
00:10:36.616      },
00:10:36.616      {
00:10:36.616        "name": "BaseBdev3",
00:10:36.616        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:36.616        "is_configured": false,
00:10:36.616        "data_offset": 0,
00:10:36.616        "data_size": 0
00:10:36.616      }
00:10:36.616    ]
00:10:36.616  }'
00:10:36.616   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:36.616   11:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
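Each verify_raid_bdev_state call above is built from the same two pieces: dump every raid bdev, filter out the one under test with jq, then compare the captured fields (state, raid_level, strip_size_kb, member counts) against the expected values passed in. A condensed sketch of that query; the field-by-field assertions are elided here:

    raid_bdev_info=$(rpc_cmd bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "Existed_Raid")')
    # e.g. while BaseBdev2/BaseBdev3 are still missing:
    [[ $(jq -r '.state' <<< "$raid_bdev_info") == configuring ]]
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$raid_bdev_info") == 1 ]]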
00:10:36.874   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:10:36.874   11:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:36.874   11:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:36.874  [2024-12-16 11:32:02.894228] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:36.874  BaseBdev2
00:10:36.874   11:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:36.874   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:10:36.874   11:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:10:36.874   11:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:36.874   11:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:10:36.874   11:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:36.874   11:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:36.874   11:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:36.874   11:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:36.874   11:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:36.874   11:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:36.874   11:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:10:36.874   11:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:36.874   11:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:36.874  [
00:10:36.874  {
00:10:36.874  "name": "BaseBdev2",
00:10:36.874  "aliases": [
00:10:36.874  "eda438f4-87bc-4bbf-873f-4560ada97db7"
00:10:36.874  ],
00:10:36.874  "product_name": "Malloc disk",
00:10:36.874  "block_size": 512,
00:10:36.874  "num_blocks": 65536,
00:10:36.874  "uuid": "eda438f4-87bc-4bbf-873f-4560ada97db7",
00:10:36.874  "assigned_rate_limits": {
00:10:36.874  "rw_ios_per_sec": 0,
00:10:36.874  "rw_mbytes_per_sec": 0,
00:10:36.874  "r_mbytes_per_sec": 0,
00:10:36.874  "w_mbytes_per_sec": 0
00:10:36.874  },
00:10:36.874  "claimed": true,
00:10:36.874  "claim_type": "exclusive_write",
00:10:36.874  "zoned": false,
00:10:36.874  "supported_io_types": {
00:10:36.874  "read": true,
00:10:36.874  "write": true,
00:10:36.874  "unmap": true,
00:10:36.874  "flush": true,
00:10:36.874  "reset": true,
00:10:36.874  "nvme_admin": false,
00:10:36.874  "nvme_io": false,
00:10:36.874  "nvme_io_md": false,
00:10:36.874  "write_zeroes": true,
00:10:36.874  "zcopy": true,
00:10:36.874  "get_zone_info": false,
00:10:36.874  "zone_management": false,
00:10:36.874  "zone_append": false,
00:10:36.874  "compare": false,
00:10:36.874  "compare_and_write": false,
00:10:36.874  "abort": true,
00:10:36.874  "seek_hole": false,
00:10:36.874  "seek_data": false,
00:10:36.874  "copy": true,
00:10:36.874  "nvme_iov_md": false
00:10:36.874  },
00:10:36.874  "memory_domains": [
00:10:36.874  {
00:10:36.874  "dma_device_id": "system",
00:10:36.874  "dma_device_type": 1
00:10:36.874  },
00:10:36.874  {
00:10:36.874  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:36.874  "dma_device_type": 2
00:10:36.874  }
00:10:36.874  ],
00:10:36.874  "driver_specific": {}
00:10:36.874  }
00:10:36.874  ]
00:10:36.874   11:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:36.874   11:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
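waitforbdev, traced above for BaseBdev2, is a thin helper: it flushes pending examine callbacks with bdev_wait_for_examine and then asks for the bdev by name with a 2000 ms timeout, so the caller only proceeds once the new disk is really registered. Reduced to its RPCs:

    rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2   # 32 MiB backing store, 512-byte blocks -> 65536 blocks
    rpc_cmd bdev_wait_for_examine                    # let the raid module's examine/claim path settle
    rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000      # errors out if the bdev never appears in time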
00:10:36.874   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:10:36.874   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:36.874   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:36.874   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:36.874   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:36.874   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:36.874   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:36.874   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:36.874   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:36.874   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:36.874   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:36.874   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:37.134    11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:37.134    11:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:37.134    11:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:37.134    11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:37.134    11:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:37.134   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:37.134    "name": "Existed_Raid",
00:10:37.134    "uuid": "24f170c5-e06d-4f23-a432-27754900fb43",
00:10:37.134    "strip_size_kb": 0,
00:10:37.134    "state": "configuring",
00:10:37.134    "raid_level": "raid1",
00:10:37.134    "superblock": true,
00:10:37.134    "num_base_bdevs": 3,
00:10:37.134    "num_base_bdevs_discovered": 2,
00:10:37.134    "num_base_bdevs_operational": 3,
00:10:37.134    "base_bdevs_list": [
00:10:37.134      {
00:10:37.134        "name": "BaseBdev1",
00:10:37.134        "uuid": "c909be9d-46bd-4246-9fe5-c2ebc14d6bb2",
00:10:37.134        "is_configured": true,
00:10:37.134        "data_offset": 2048,
00:10:37.134        "data_size": 63488
00:10:37.134      },
00:10:37.134      {
00:10:37.134        "name": "BaseBdev2",
00:10:37.134        "uuid": "eda438f4-87bc-4bbf-873f-4560ada97db7",
00:10:37.134        "is_configured": true,
00:10:37.134        "data_offset": 2048,
00:10:37.134        "data_size": 63488
00:10:37.134      },
00:10:37.134      {
00:10:37.134        "name": "BaseBdev3",
00:10:37.134        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:37.134        "is_configured": false,
00:10:37.134        "data_offset": 0,
00:10:37.134        "data_size": 0
00:10:37.134      }
00:10:37.134    ]
00:10:37.134  }'
00:10:37.134   11:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:37.134   11:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:37.393   11:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:10:37.393   11:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:37.393   11:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:37.393  [2024-12-16 11:32:03.444599] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:37.393  [2024-12-16 11:32:03.444811] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:10:37.393  [2024-12-16 11:32:03.444830] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:10:37.393  BaseBdev3
00:10:37.393  [2024-12-16 11:32:03.445119] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:10:37.393  [2024-12-16 11:32:03.445257] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:10:37.393  [2024-12-16 11:32:03.445274] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:10:37.393  [2024-12-16 11:32:03.445432] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:37.393   11:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:37.393   11:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:10:37.393   11:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:10:37.393   11:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:37.393   11:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:10:37.393   11:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:37.393   11:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:37.393   11:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:37.393   11:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:37.393   11:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:37.393   11:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:37.393   11:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:10:37.393   11:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:37.393   11:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:37.653  [
00:10:37.653  {
00:10:37.653  "name": "BaseBdev3",
00:10:37.653  "aliases": [
00:10:37.653  "717202aa-165a-4169-ac9c-63d411234091"
00:10:37.653  ],
00:10:37.653  "product_name": "Malloc disk",
00:10:37.653  "block_size": 512,
00:10:37.653  "num_blocks": 65536,
00:10:37.653  "uuid": "717202aa-165a-4169-ac9c-63d411234091",
00:10:37.653  "assigned_rate_limits": {
00:10:37.653  "rw_ios_per_sec": 0,
00:10:37.653  "rw_mbytes_per_sec": 0,
00:10:37.653  "r_mbytes_per_sec": 0,
00:10:37.653  "w_mbytes_per_sec": 0
00:10:37.653  },
00:10:37.653  "claimed": true,
00:10:37.653  "claim_type": "exclusive_write",
00:10:37.653  "zoned": false,
00:10:37.653  "supported_io_types": {
00:10:37.653  "read": true,
00:10:37.653  "write": true,
00:10:37.653  "unmap": true,
00:10:37.653  "flush": true,
00:10:37.653  "reset": true,
00:10:37.653  "nvme_admin": false,
00:10:37.653  "nvme_io": false,
00:10:37.653  "nvme_io_md": false,
00:10:37.653  "write_zeroes": true,
00:10:37.653  "zcopy": true,
00:10:37.653  "get_zone_info": false,
00:10:37.653  "zone_management": false,
00:10:37.653  "zone_append": false,
00:10:37.653  "compare": false,
00:10:37.653  "compare_and_write": false,
00:10:37.653  "abort": true,
00:10:37.653  "seek_hole": false,
00:10:37.653  "seek_data": false,
00:10:37.653  "copy": true,
00:10:37.653  "nvme_iov_md": false
00:10:37.653  },
00:10:37.653  "memory_domains": [
00:10:37.653  {
00:10:37.653  "dma_device_id": "system",
00:10:37.653  "dma_device_type": 1
00:10:37.653  },
00:10:37.653  {
00:10:37.653  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:37.653  "dma_device_type": 2
00:10:37.653  }
00:10:37.653  ],
00:10:37.653  "driver_specific": {}
00:10:37.653  }
00:10:37.653  ]
00:10:37.653   11:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:37.653   11:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:10:37.653   11:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:10:37.653   11:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:37.653   11:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:10:37.653   11:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:37.653   11:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:37.653   11:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:37.653   11:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:37.653   11:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:37.653   11:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:37.653   11:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:37.653   11:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:37.653   11:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:37.653    11:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:37.653    11:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:37.653    11:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:37.653    11:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:37.653    11:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:37.653   11:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:37.653    "name": "Existed_Raid",
00:10:37.653    "uuid": "24f170c5-e06d-4f23-a432-27754900fb43",
00:10:37.653    "strip_size_kb": 0,
00:10:37.653    "state": "online",
00:10:37.653    "raid_level": "raid1",
00:10:37.653    "superblock": true,
00:10:37.653    "num_base_bdevs": 3,
00:10:37.653    "num_base_bdevs_discovered": 3,
00:10:37.653    "num_base_bdevs_operational": 3,
00:10:37.653    "base_bdevs_list": [
00:10:37.653      {
00:10:37.653        "name": "BaseBdev1",
00:10:37.653        "uuid": "c909be9d-46bd-4246-9fe5-c2ebc14d6bb2",
00:10:37.653        "is_configured": true,
00:10:37.653        "data_offset": 2048,
00:10:37.653        "data_size": 63488
00:10:37.653      },
00:10:37.653      {
00:10:37.653        "name": "BaseBdev2",
00:10:37.653        "uuid": "eda438f4-87bc-4bbf-873f-4560ada97db7",
00:10:37.653        "is_configured": true,
00:10:37.653        "data_offset": 2048,
00:10:37.653        "data_size": 63488
00:10:37.653      },
00:10:37.653      {
00:10:37.653        "name": "BaseBdev3",
00:10:37.653        "uuid": "717202aa-165a-4169-ac9c-63d411234091",
00:10:37.654        "is_configured": true,
00:10:37.654        "data_offset": 2048,
00:10:37.654        "data_size": 63488
00:10:37.654      }
00:10:37.654    ]
00:10:37.654  }'
00:10:37.654   11:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:37.654   11:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
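With BaseBdev3 claimed, the array flips from configuring to online (the io-device register and "raid bdev is created" lines above) and the dump now reports 3 of 3 members discovered and operational. The per-member geometry is consistent with the superblock that the -s flag asked for: each 65536-block malloc disk reserves 2048 blocks at the front, leaving 63488 data blocks per member, and raid1 mirrors that, so the volume itself exposes 63488 blocks. A quick sanity check:

    (( 65536 - 2048 == 63488 ))   # base bdev blocks minus the reserved region = per-member data_size
    # matches data_offset 2048 / data_size 63488 per member and num_blocks 63488 on the Raid Volume dump below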
00:10:37.913   11:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:10:37.913   11:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:10:37.913   11:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:37.913   11:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:37.913   11:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:10:37.913   11:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:37.913    11:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:37.913    11:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:10:37.913    11:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:37.913    11:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:37.913  [2024-12-16 11:32:03.972115] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:38.181    11:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:38.181   11:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:38.181    "name": "Existed_Raid",
00:10:38.181    "aliases": [
00:10:38.181      "24f170c5-e06d-4f23-a432-27754900fb43"
00:10:38.181    ],
00:10:38.181    "product_name": "Raid Volume",
00:10:38.181    "block_size": 512,
00:10:38.181    "num_blocks": 63488,
00:10:38.181    "uuid": "24f170c5-e06d-4f23-a432-27754900fb43",
00:10:38.181    "assigned_rate_limits": {
00:10:38.181      "rw_ios_per_sec": 0,
00:10:38.181      "rw_mbytes_per_sec": 0,
00:10:38.181      "r_mbytes_per_sec": 0,
00:10:38.181      "w_mbytes_per_sec": 0
00:10:38.181    },
00:10:38.181    "claimed": false,
00:10:38.181    "zoned": false,
00:10:38.181    "supported_io_types": {
00:10:38.181      "read": true,
00:10:38.181      "write": true,
00:10:38.181      "unmap": false,
00:10:38.181      "flush": false,
00:10:38.181      "reset": true,
00:10:38.181      "nvme_admin": false,
00:10:38.181      "nvme_io": false,
00:10:38.181      "nvme_io_md": false,
00:10:38.181      "write_zeroes": true,
00:10:38.181      "zcopy": false,
00:10:38.181      "get_zone_info": false,
00:10:38.181      "zone_management": false,
00:10:38.181      "zone_append": false,
00:10:38.181      "compare": false,
00:10:38.181      "compare_and_write": false,
00:10:38.181      "abort": false,
00:10:38.181      "seek_hole": false,
00:10:38.181      "seek_data": false,
00:10:38.181      "copy": false,
00:10:38.181      "nvme_iov_md": false
00:10:38.181    },
00:10:38.181    "memory_domains": [
00:10:38.181      {
00:10:38.181        "dma_device_id": "system",
00:10:38.181        "dma_device_type": 1
00:10:38.181      },
00:10:38.181      {
00:10:38.181        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:38.181        "dma_device_type": 2
00:10:38.181      },
00:10:38.181      {
00:10:38.181        "dma_device_id": "system",
00:10:38.181        "dma_device_type": 1
00:10:38.181      },
00:10:38.181      {
00:10:38.181        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:38.181        "dma_device_type": 2
00:10:38.181      },
00:10:38.181      {
00:10:38.181        "dma_device_id": "system",
00:10:38.181        "dma_device_type": 1
00:10:38.181      },
00:10:38.181      {
00:10:38.181        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:38.181        "dma_device_type": 2
00:10:38.181      }
00:10:38.181    ],
00:10:38.181    "driver_specific": {
00:10:38.181      "raid": {
00:10:38.181        "uuid": "24f170c5-e06d-4f23-a432-27754900fb43",
00:10:38.181        "strip_size_kb": 0,
00:10:38.181        "state": "online",
00:10:38.181        "raid_level": "raid1",
00:10:38.181        "superblock": true,
00:10:38.181        "num_base_bdevs": 3,
00:10:38.181        "num_base_bdevs_discovered": 3,
00:10:38.181        "num_base_bdevs_operational": 3,
00:10:38.181        "base_bdevs_list": [
00:10:38.181          {
00:10:38.181            "name": "BaseBdev1",
00:10:38.181            "uuid": "c909be9d-46bd-4246-9fe5-c2ebc14d6bb2",
00:10:38.182            "is_configured": true,
00:10:38.182            "data_offset": 2048,
00:10:38.182            "data_size": 63488
00:10:38.182          },
00:10:38.182          {
00:10:38.182            "name": "BaseBdev2",
00:10:38.182            "uuid": "eda438f4-87bc-4bbf-873f-4560ada97db7",
00:10:38.182            "is_configured": true,
00:10:38.182            "data_offset": 2048,
00:10:38.182            "data_size": 63488
00:10:38.182          },
00:10:38.182          {
00:10:38.182            "name": "BaseBdev3",
00:10:38.182            "uuid": "717202aa-165a-4169-ac9c-63d411234091",
00:10:38.182            "is_configured": true,
00:10:38.182            "data_offset": 2048,
00:10:38.182            "data_size": 63488
00:10:38.182          }
00:10:38.182        ]
00:10:38.182      }
00:10:38.182    }
00:10:38.182  }'
00:10:38.182    11:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:38.182   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:10:38.182  BaseBdev2
00:10:38.182  BaseBdev3'
00:10:38.182    11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:38.182   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:10:38.182   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:38.182    11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:38.182    11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:10:38.182    11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:38.182    11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:38.182    11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:38.182   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:38.182   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:10:38.182   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:38.182    11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:38.182    11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:10:38.182    11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:38.182    11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:38.182    11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:38.182   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:38.182   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:10:38.182   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:38.182    11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:10:38.182    11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:38.182    11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:38.182    11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:38.182    11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:38.182   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:38.182   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
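verify_raid_bdev_properties above checks that the assembled volume exposes the same on-disk format as every configured member: it pulls the Existed_Raid descriptor, lists the configured base bdev names, and compares the joined [.block_size, .md_size, .md_interleave, .dif_type] string of the raid against each member. For these plain malloc disks only block_size is set, so both sides reduce to "512" followed by three empty fields, which is exactly the '512   ' comparison seen above. The core of one iteration, condensed:

    fmt='[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
    cmp_raid_bdev=$(rpc_cmd bdev_get_bdevs -b Existed_Raid | jq -r ".[] | $fmt")
    cmp_base_bdev=$(rpc_cmd bdev_get_bdevs -b BaseBdev1    | jq -r ".[] | $fmt")
    [[ $cmp_raid_bdev == "$cmp_base_bdev" ]]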
00:10:38.182   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:10:38.182   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:38.182   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:38.182  [2024-12-16 11:32:04.211496] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:10:38.182   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:38.182   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:10:38.182   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:10:38.182   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:10:38.182   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0
00:10:38.182   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:10:38.182   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:10:38.182   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:38.182   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:38.182   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:38.182   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:38.182   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:38.182   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:38.182   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:38.182   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:38.182   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:38.182    11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:38.182    11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:38.182    11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:38.182    11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:38.457    11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:38.457   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:38.457    "name": "Existed_Raid",
00:10:38.457    "uuid": "24f170c5-e06d-4f23-a432-27754900fb43",
00:10:38.457    "strip_size_kb": 0,
00:10:38.457    "state": "online",
00:10:38.457    "raid_level": "raid1",
00:10:38.457    "superblock": true,
00:10:38.457    "num_base_bdevs": 3,
00:10:38.457    "num_base_bdevs_discovered": 2,
00:10:38.457    "num_base_bdevs_operational": 2,
00:10:38.457    "base_bdevs_list": [
00:10:38.457      {
00:10:38.457        "name": null,
00:10:38.457        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:38.457        "is_configured": false,
00:10:38.457        "data_offset": 0,
00:10:38.457        "data_size": 63488
00:10:38.457      },
00:10:38.457      {
00:10:38.457        "name": "BaseBdev2",
00:10:38.457        "uuid": "eda438f4-87bc-4bbf-873f-4560ada97db7",
00:10:38.457        "is_configured": true,
00:10:38.457        "data_offset": 2048,
00:10:38.457        "data_size": 63488
00:10:38.457      },
00:10:38.457      {
00:10:38.457        "name": "BaseBdev3",
00:10:38.457        "uuid": "717202aa-165a-4169-ac9c-63d411234091",
00:10:38.457        "is_configured": true,
00:10:38.457        "data_offset": 2048,
00:10:38.457        "data_size": 63488
00:10:38.457      }
00:10:38.457    ]
00:10:38.457  }'
00:10:38.457   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:38.457   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
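raid1 is one of the levels has_redundancy accepts (the case/return 0 lines above), so deleting BaseBdev1 out from under the volume only degrades it: the follow-up dump keeps the array online, reports the vacated slot with a null name and is_configured false, and drops both the discovered and operational counters to 2. The step amounts to:

    rpc_cmd bdev_malloc_delete BaseBdev1                 # hot-remove one mirror leg
    rpc_cmd bdev_raid_get_bdevs all | jq -r \
        '.[] | select(.name == "Existed_Raid") | .state, .num_base_bdevs_operational'
    # expected output for a degraded raid1: online, then 2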
00:10:38.716   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:10:38.716   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:10:38.716    11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:38.716    11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:38.716    11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:38.716    11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:10:38.716    11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:38.716   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:10:38.716   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:10:38.716   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:10:38.716   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:38.717   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:38.717  [2024-12-16 11:32:04.770740] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:10:38.976    11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:38.976    11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:10:38.976    11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:38.976    11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:38.976    11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:38.976  [2024-12-16 11:32:04.842182] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:38.976  [2024-12-16 11:32:04.842291] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:38.976  [2024-12-16 11:32:04.854148] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:38.976  [2024-12-16 11:32:04.854211] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:38.976  [2024-12-16 11:32:04.854227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:10:38.976    11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:38.976    11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:38.976    11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:10:38.976    11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:38.976    11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
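Deleting BaseBdev2 and BaseBdev3 as well leaves no operational member, so the array deconfigures from online to offline and is destructed (the raid_bdev_destruct / cleanup lines above). The check that follows only has to confirm that nothing is left to report:

    raid_bdev=$(rpc_cmd bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)')
    [[ -z $raid_bdev ]]   # empty: the raid bdev disappeared with its last member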
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:38.976  BaseBdev2
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:38.976   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:38.976  [
00:10:38.976  {
00:10:38.976  "name": "BaseBdev2",
00:10:38.976  "aliases": [
00:10:38.976  "3f015ed7-3144-42be-8b09-92cf235c83c8"
00:10:38.976  ],
00:10:38.976  "product_name": "Malloc disk",
00:10:38.976  "block_size": 512,
00:10:38.976  "num_blocks": 65536,
00:10:38.976  "uuid": "3f015ed7-3144-42be-8b09-92cf235c83c8",
00:10:38.976  "assigned_rate_limits": {
00:10:38.976  "rw_ios_per_sec": 0,
00:10:38.976  "rw_mbytes_per_sec": 0,
00:10:38.976  "r_mbytes_per_sec": 0,
00:10:38.976  "w_mbytes_per_sec": 0
00:10:38.976  },
00:10:38.976  "claimed": false,
00:10:38.976  "zoned": false,
00:10:38.977  "supported_io_types": {
00:10:38.977  "read": true,
00:10:38.977  "write": true,
00:10:38.977  "unmap": true,
00:10:38.977  "flush": true,
00:10:38.977  "reset": true,
00:10:38.977  "nvme_admin": false,
00:10:38.977  "nvme_io": false,
00:10:38.977  "nvme_io_md": false,
00:10:38.977  "write_zeroes": true,
00:10:38.977  "zcopy": true,
00:10:38.977  "get_zone_info": false,
00:10:38.977  "zone_management": false,
00:10:38.977  "zone_append": false,
00:10:38.977  "compare": false,
00:10:38.977  "compare_and_write": false,
00:10:38.977  "abort": true,
00:10:38.977  "seek_hole": false,
00:10:38.977  "seek_data": false,
00:10:38.977  "copy": true,
00:10:38.977  "nvme_iov_md": false
00:10:38.977  },
00:10:38.977  "memory_domains": [
00:10:38.977  {
00:10:38.977  "dma_device_id": "system",
00:10:38.977  "dma_device_type": 1
00:10:38.977  },
00:10:38.977  {
00:10:38.977  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:38.977  "dma_device_type": 2
00:10:38.977  }
00:10:38.977  ],
00:10:38.977  "driver_specific": {}
00:10:38.977  }
00:10:38.977  ]
00:10:38.977   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:38.977   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:10:38.977   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:38.977   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:38.977   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:10:38.977   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:38.977   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:38.977  BaseBdev3
00:10:38.977   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:38.977   11:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:10:38.977   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:10:38.977   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:38.977   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:10:38.977   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:38.977   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:38.977   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:38.977   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:38.977   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:38.977   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:38.977   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:10:38.977   11:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:38.977   11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:38.977  [
00:10:38.977  {
00:10:38.977  "name": "BaseBdev3",
00:10:38.977  "aliases": [
00:10:38.977  "d04fb5bb-69cf-4bea-a68f-daf093beadc8"
00:10:38.977  ],
00:10:38.977  "product_name": "Malloc disk",
00:10:38.977  "block_size": 512,
00:10:38.977  "num_blocks": 65536,
00:10:38.977  "uuid": "d04fb5bb-69cf-4bea-a68f-daf093beadc8",
00:10:38.977  "assigned_rate_limits": {
00:10:38.977  "rw_ios_per_sec": 0,
00:10:38.977  "rw_mbytes_per_sec": 0,
00:10:38.977  "r_mbytes_per_sec": 0,
00:10:38.977  "w_mbytes_per_sec": 0
00:10:38.977  },
00:10:38.977  "claimed": false,
00:10:38.977  "zoned": false,
00:10:38.977  "supported_io_types": {
00:10:38.977  "read": true,
00:10:38.977  "write": true,
00:10:38.977  "unmap": true,
00:10:38.977  "flush": true,
00:10:38.977  "reset": true,
00:10:38.977  "nvme_admin": false,
00:10:38.977  "nvme_io": false,
00:10:38.977  "nvme_io_md": false,
00:10:38.977  "write_zeroes": true,
00:10:38.977  "zcopy": true,
00:10:38.977  "get_zone_info": false,
00:10:38.977  "zone_management": false,
00:10:38.977  "zone_append": false,
00:10:38.977  "compare": false,
00:10:38.977  "compare_and_write": false,
00:10:38.977  "abort": true,
00:10:38.977  "seek_hole": false,
00:10:38.977  "seek_data": false,
00:10:38.977  "copy": true,
00:10:38.977  "nvme_iov_md": false
00:10:38.977  },
00:10:38.977  "memory_domains": [
00:10:38.977  {
00:10:38.977  "dma_device_id": "system",
00:10:38.977  "dma_device_type": 1
00:10:38.977  },
00:10:38.977  {
00:10:38.977  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:38.977  "dma_device_type": 2
00:10:38.977  }
00:10:38.977  ],
00:10:38.977  "driver_specific": {}
00:10:38.977  }
00:10:38.977  ]
00:10:38.977   11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:38.977   11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:10:38.977   11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:38.977   11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:38.977   11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:38.977   11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:38.977   11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:38.977  [2024-12-16 11:32:05.031434] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:38.977  [2024-12-16 11:32:05.031487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:38.977  [2024-12-16 11:32:05.031510] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:38.977  [2024-12-16 11:32:05.033600] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:38.977   11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
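BaseBdev2 and BaseBdev3 were just re-created as fresh malloc disks, so the create call above assembles a brand-new array (note the new Existed_Raid uuid in the dump that follows) rather than reloading anything from a surviving superblock, and with BaseBdev1 still missing it again sits in configuring with 2 of 3 members discovered. A condensed form of what the next verify asserts; the -e flag is an assumption here, the test itself does the comparison in bash:

    rpc_cmd bdev_raid_get_bdevs all |
        jq -e '.[] | select(.name == "Existed_Raid") |
               .state == "configuring" and .num_base_bdevs_discovered == 2'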
00:10:38.977   11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:38.977   11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:38.977   11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:38.977   11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:38.977   11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:38.977   11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:38.977   11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:38.977   11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:38.977   11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:38.977   11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:39.248    11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:39.248    11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:39.248    11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:39.248    11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:39.248    11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:39.248   11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:39.248    "name": "Existed_Raid",
00:10:39.248    "uuid": "4fd1a52a-f103-45fd-a25a-31195b0590ac",
00:10:39.248    "strip_size_kb": 0,
00:10:39.248    "state": "configuring",
00:10:39.248    "raid_level": "raid1",
00:10:39.248    "superblock": true,
00:10:39.248    "num_base_bdevs": 3,
00:10:39.248    "num_base_bdevs_discovered": 2,
00:10:39.248    "num_base_bdevs_operational": 3,
00:10:39.248    "base_bdevs_list": [
00:10:39.248      {
00:10:39.248        "name": "BaseBdev1",
00:10:39.248        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:39.248        "is_configured": false,
00:10:39.248        "data_offset": 0,
00:10:39.248        "data_size": 0
00:10:39.248      },
00:10:39.248      {
00:10:39.248        "name": "BaseBdev2",
00:10:39.248        "uuid": "3f015ed7-3144-42be-8b09-92cf235c83c8",
00:10:39.248        "is_configured": true,
00:10:39.248        "data_offset": 2048,
00:10:39.248        "data_size": 63488
00:10:39.248      },
00:10:39.248      {
00:10:39.248        "name": "BaseBdev3",
00:10:39.248        "uuid": "d04fb5bb-69cf-4bea-a68f-daf093beadc8",
00:10:39.248        "is_configured": true,
00:10:39.248        "data_offset": 2048,
00:10:39.248        "data_size": 63488
00:10:39.248      }
00:10:39.248    ]
00:10:39.248  }'
00:10:39.248   11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:39.248   11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:39.508   11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:10:39.508   11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:39.508   11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:39.508  [2024-12-16 11:32:05.458745] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:10:39.508   11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:39.508   11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:39.508   11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:39.508   11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:39.508   11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:39.508   11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:39.508   11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:39.508   11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:39.508   11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:39.508   11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:39.508   11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:39.508    11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:39.508    11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:39.508    11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:39.508    11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:39.508    11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:39.508   11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:39.508    "name": "Existed_Raid",
00:10:39.508    "uuid": "4fd1a52a-f103-45fd-a25a-31195b0590ac",
00:10:39.508    "strip_size_kb": 0,
00:10:39.508    "state": "configuring",
00:10:39.508    "raid_level": "raid1",
00:10:39.508    "superblock": true,
00:10:39.508    "num_base_bdevs": 3,
00:10:39.508    "num_base_bdevs_discovered": 1,
00:10:39.508    "num_base_bdevs_operational": 3,
00:10:39.508    "base_bdevs_list": [
00:10:39.508      {
00:10:39.508        "name": "BaseBdev1",
00:10:39.508        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:39.508        "is_configured": false,
00:10:39.508        "data_offset": 0,
00:10:39.508        "data_size": 0
00:10:39.508      },
00:10:39.508      {
00:10:39.508        "name": null,
00:10:39.508        "uuid": "3f015ed7-3144-42be-8b09-92cf235c83c8",
00:10:39.508        "is_configured": false,
00:10:39.508        "data_offset": 0,
00:10:39.508        "data_size": 63488
00:10:39.508      },
00:10:39.508      {
00:10:39.508        "name": "BaseBdev3",
00:10:39.508        "uuid": "d04fb5bb-69cf-4bea-a68f-daf093beadc8",
00:10:39.508        "is_configured": true,
00:10:39.508        "data_offset": 2048,
00:10:39.508        "data_size": 63488
00:10:39.508      }
00:10:39.508    ]
00:10:39.508  }'
00:10:39.508   11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:39.508   11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:40.077    11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:40.077    11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:40.077    11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:40.077    11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:40.077    11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:40.077   11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
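bdev_raid_remove_base_bdev detaches a member by name (the malloc disk itself is left in place); on a still-configuring array the slot is simply cleared, so the dump keeps three entries but slot 1 now carries a null name with is_configured false, and the discovered count drops back to 1. The explicit probe above, spelled out:

    rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
    rpc_cmd bdev_raid_get_bdevs all |
        jq '.[0].base_bdevs_list[1].is_configured'   # prints false for the vacated slot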
00:10:40.077   11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:40.077   11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:40.077   11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:40.077  [2024-12-16 11:32:05.980837] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:40.077  BaseBdev1
00:10:40.077   11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:40.077   11:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:10:40.077   11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:10:40.077   11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:40.077   11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:10:40.077   11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:40.077   11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:40.077   11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:40.077   11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:40.077   11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:40.077   11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:40.077   11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:40.077   11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:40.077   11:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:40.077  [
00:10:40.077  {
00:10:40.077  "name": "BaseBdev1",
00:10:40.077  "aliases": [
00:10:40.077  "2c4c6118-6d56-4a4b-ab5a-eff1626f90ab"
00:10:40.077  ],
00:10:40.077  "product_name": "Malloc disk",
00:10:40.077  "block_size": 512,
00:10:40.077  "num_blocks": 65536,
00:10:40.077  "uuid": "2c4c6118-6d56-4a4b-ab5a-eff1626f90ab",
00:10:40.077  "assigned_rate_limits": {
00:10:40.077  "rw_ios_per_sec": 0,
00:10:40.077  "rw_mbytes_per_sec": 0,
00:10:40.077  "r_mbytes_per_sec": 0,
00:10:40.077  "w_mbytes_per_sec": 0
00:10:40.077  },
00:10:40.077  "claimed": true,
00:10:40.077  "claim_type": "exclusive_write",
00:10:40.077  "zoned": false,
00:10:40.077  "supported_io_types": {
00:10:40.077  "read": true,
00:10:40.077  "write": true,
00:10:40.077  "unmap": true,
00:10:40.077  "flush": true,
00:10:40.077  "reset": true,
00:10:40.077  "nvme_admin": false,
00:10:40.077  "nvme_io": false,
00:10:40.077  "nvme_io_md": false,
00:10:40.077  "write_zeroes": true,
00:10:40.077  "zcopy": true,
00:10:40.077  "get_zone_info": false,
00:10:40.077  "zone_management": false,
00:10:40.077  "zone_append": false,
00:10:40.077  "compare": false,
00:10:40.077  "compare_and_write": false,
00:10:40.077  "abort": true,
00:10:40.077  "seek_hole": false,
00:10:40.077  "seek_data": false,
00:10:40.077  "copy": true,
00:10:40.077  "nvme_iov_md": false
00:10:40.077  },
00:10:40.077  "memory_domains": [
00:10:40.077  {
00:10:40.077  "dma_device_id": "system",
00:10:40.077  "dma_device_type": 1
00:10:40.077  },
00:10:40.077  {
00:10:40.077  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:40.077  "dma_device_type": 2
00:10:40.077  }
00:10:40.077  ],
00:10:40.077  "driver_specific": {}
00:10:40.077  }
00:10:40.077  ]
00:10:40.077   11:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:40.077   11:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
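waitforbdev above first drains outstanding examine callbacks and then queries the named bdev with a timeout, so the test only continues once BaseBdev1 is actually registered. A rough standalone equivalent of that pattern (a sketch, not the helper from autotest_common.sh itself):

  # Block until all bdev examine callbacks have completed ...
  scripts/rpc.py bdev_wait_for_examine
  # ... then wait up to 2000 ms for the bdev to show up before giving up.
  scripts/rpc.py bdev_get_bdevs -b BaseBdev1 -t 2000 >/dev/null && echo 'BaseBdev1 ready'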
00:10:40.077   11:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:40.077   11:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:40.077   11:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:40.077   11:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:40.077   11:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:40.077   11:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:40.077   11:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:40.077   11:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:40.077   11:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:40.077   11:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:40.077    11:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:40.077    11:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:40.077    11:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:40.077    11:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:40.077    11:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:40.077   11:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:40.077    "name": "Existed_Raid",
00:10:40.077    "uuid": "4fd1a52a-f103-45fd-a25a-31195b0590ac",
00:10:40.077    "strip_size_kb": 0,
00:10:40.077    "state": "configuring",
00:10:40.077    "raid_level": "raid1",
00:10:40.077    "superblock": true,
00:10:40.077    "num_base_bdevs": 3,
00:10:40.077    "num_base_bdevs_discovered": 2,
00:10:40.077    "num_base_bdevs_operational": 3,
00:10:40.077    "base_bdevs_list": [
00:10:40.077      {
00:10:40.077        "name": "BaseBdev1",
00:10:40.077        "uuid": "2c4c6118-6d56-4a4b-ab5a-eff1626f90ab",
00:10:40.077        "is_configured": true,
00:10:40.077        "data_offset": 2048,
00:10:40.077        "data_size": 63488
00:10:40.077      },
00:10:40.077      {
00:10:40.077        "name": null,
00:10:40.077        "uuid": "3f015ed7-3144-42be-8b09-92cf235c83c8",
00:10:40.077        "is_configured": false,
00:10:40.077        "data_offset": 0,
00:10:40.077        "data_size": 63488
00:10:40.077      },
00:10:40.077      {
00:10:40.077        "name": "BaseBdev3",
00:10:40.077        "uuid": "d04fb5bb-69cf-4bea-a68f-daf093beadc8",
00:10:40.077        "is_configured": true,
00:10:40.077        "data_offset": 2048,
00:10:40.077        "data_size": 63488
00:10:40.077      }
00:10:40.077    ]
00:10:40.077  }'
00:10:40.077   11:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:40.077   11:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
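verify_raid_bdev_state selects the Existed_Raid entry out of bdev_raid_get_bdevs and compares its state, RAID level, strip size and base bdev counts against the expected values passed in. A condensed sketch of the same idea, using the field names visible in the dump above (not the helper's exact code):

  info=$(scripts/rpc.py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  [[ $(jq -r '.state'      <<<"$info") == "configuring" ]]
  [[ $(jq -r '.raid_level' <<<"$info") == "raid1" ]]
  # With only two of the three members attached, two slots report as discovered.
  [[ $(jq -r '.num_base_bdevs_discovered' <<<"$info") -eq 2 ]]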
00:10:40.647    11:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:40.647    11:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:40.647    11:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:40.647    11:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:40.647    11:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:40.647   11:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:10:40.647   11:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:10:40.647   11:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:40.647   11:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:40.647  [2024-12-16 11:32:06.535962] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:40.647   11:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:40.647   11:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:40.647   11:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:40.647   11:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:40.647   11:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:40.647   11:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:40.647   11:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:40.647   11:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:40.647   11:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:40.647   11:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:40.647   11:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:40.647    11:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:40.647    11:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:40.647    11:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:40.647    11:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:40.647    11:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:40.647   11:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:40.647    "name": "Existed_Raid",
00:10:40.647    "uuid": "4fd1a52a-f103-45fd-a25a-31195b0590ac",
00:10:40.647    "strip_size_kb": 0,
00:10:40.647    "state": "configuring",
00:10:40.647    "raid_level": "raid1",
00:10:40.647    "superblock": true,
00:10:40.647    "num_base_bdevs": 3,
00:10:40.647    "num_base_bdevs_discovered": 1,
00:10:40.647    "num_base_bdevs_operational": 3,
00:10:40.647    "base_bdevs_list": [
00:10:40.647      {
00:10:40.647        "name": "BaseBdev1",
00:10:40.647        "uuid": "2c4c6118-6d56-4a4b-ab5a-eff1626f90ab",
00:10:40.647        "is_configured": true,
00:10:40.647        "data_offset": 2048,
00:10:40.647        "data_size": 63488
00:10:40.647      },
00:10:40.647      {
00:10:40.647        "name": null,
00:10:40.647        "uuid": "3f015ed7-3144-42be-8b09-92cf235c83c8",
00:10:40.647        "is_configured": false,
00:10:40.647        "data_offset": 0,
00:10:40.647        "data_size": 63488
00:10:40.647      },
00:10:40.647      {
00:10:40.647        "name": null,
00:10:40.647        "uuid": "d04fb5bb-69cf-4bea-a68f-daf093beadc8",
00:10:40.647        "is_configured": false,
00:10:40.647        "data_offset": 0,
00:10:40.647        "data_size": 63488
00:10:40.647      }
00:10:40.647    ]
00:10:40.647  }'
00:10:40.647   11:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:40.647   11:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
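Removing a base bdev from a superblock-backed raid does not forget the member: as the dump above shows, the slot flips to is_configured=false while its uuid stays in base_bdevs_list and the raid remains in the configuring state. The removal plus a follow-up check, as a sketch:

  scripts/rpc.py bdev_raid_remove_base_bdev BaseBdev3
  # The slot is now empty but still remembered by uuid.
  scripts/rpc.py bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2] | {uuid, is_configured}'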
00:10:41.215    11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:41.215    11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:41.215    11:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:41.215    11:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:41.215    11:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:41.215   11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:10:41.215   11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:10:41.215   11:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:41.215   11:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:41.215  [2024-12-16 11:32:07.079121] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:41.215   11:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:41.215   11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:41.215   11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:41.215   11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:41.215   11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:41.215   11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:41.215   11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:41.215   11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:41.215   11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:41.215   11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:41.215   11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:41.215    11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:41.215    11:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:41.215    11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:41.215    11:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:41.215    11:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:41.215   11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:41.215    "name": "Existed_Raid",
00:10:41.215    "uuid": "4fd1a52a-f103-45fd-a25a-31195b0590ac",
00:10:41.215    "strip_size_kb": 0,
00:10:41.215    "state": "configuring",
00:10:41.215    "raid_level": "raid1",
00:10:41.215    "superblock": true,
00:10:41.215    "num_base_bdevs": 3,
00:10:41.215    "num_base_bdevs_discovered": 2,
00:10:41.215    "num_base_bdevs_operational": 3,
00:10:41.215    "base_bdevs_list": [
00:10:41.215      {
00:10:41.215        "name": "BaseBdev1",
00:10:41.216        "uuid": "2c4c6118-6d56-4a4b-ab5a-eff1626f90ab",
00:10:41.216        "is_configured": true,
00:10:41.216        "data_offset": 2048,
00:10:41.216        "data_size": 63488
00:10:41.216      },
00:10:41.216      {
00:10:41.216        "name": null,
00:10:41.216        "uuid": "3f015ed7-3144-42be-8b09-92cf235c83c8",
00:10:41.216        "is_configured": false,
00:10:41.216        "data_offset": 0,
00:10:41.216        "data_size": 63488
00:10:41.216      },
00:10:41.216      {
00:10:41.216        "name": "BaseBdev3",
00:10:41.216        "uuid": "d04fb5bb-69cf-4bea-a68f-daf093beadc8",
00:10:41.216        "is_configured": true,
00:10:41.216        "data_offset": 2048,
00:10:41.216        "data_size": 63488
00:10:41.216      }
00:10:41.216    ]
00:10:41.216  }'
00:10:41.216   11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:41.216   11:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
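Handing BaseBdev3 back with bdev_raid_add_base_bdev re-attaches it to the slot whose recorded uuid it matches, bringing num_base_bdevs_discovered back to 2 in the dump above. The step in isolation (sketch):

  # Re-attach BaseBdev3 to the configuring raid; it lands back in its old slot.
  scripts/rpc.py bdev_raid_add_base_bdev Existed_Raid BaseBdev3
  scripts/rpc.py bdev_raid_get_bdevs all | jq '.[0].num_base_bdevs_discovered'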
00:10:41.784    11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:41.784    11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:41.784    11:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:41.784    11:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:41.784    11:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:41.784   11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:10:41.784   11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:10:41.784   11:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:41.784   11:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:41.784  [2024-12-16 11:32:07.594229] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:10:41.784   11:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:41.784   11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:41.784   11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:41.784   11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:41.784   11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:41.784   11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:41.784   11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:41.784   11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:41.784   11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:41.784   11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:41.784   11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:41.784    11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:41.784    11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:41.784    11:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:41.784    11:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:41.784    11:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:41.784   11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:41.784    "name": "Existed_Raid",
00:10:41.784    "uuid": "4fd1a52a-f103-45fd-a25a-31195b0590ac",
00:10:41.784    "strip_size_kb": 0,
00:10:41.784    "state": "configuring",
00:10:41.784    "raid_level": "raid1",
00:10:41.784    "superblock": true,
00:10:41.784    "num_base_bdevs": 3,
00:10:41.784    "num_base_bdevs_discovered": 1,
00:10:41.784    "num_base_bdevs_operational": 3,
00:10:41.784    "base_bdevs_list": [
00:10:41.784      {
00:10:41.784        "name": null,
00:10:41.784        "uuid": "2c4c6118-6d56-4a4b-ab5a-eff1626f90ab",
00:10:41.784        "is_configured": false,
00:10:41.784        "data_offset": 0,
00:10:41.784        "data_size": 63488
00:10:41.784      },
00:10:41.784      {
00:10:41.784        "name": null,
00:10:41.784        "uuid": "3f015ed7-3144-42be-8b09-92cf235c83c8",
00:10:41.784        "is_configured": false,
00:10:41.784        "data_offset": 0,
00:10:41.784        "data_size": 63488
00:10:41.784      },
00:10:41.784      {
00:10:41.784        "name": "BaseBdev3",
00:10:41.784        "uuid": "d04fb5bb-69cf-4bea-a68f-daf093beadc8",
00:10:41.784        "is_configured": true,
00:10:41.784        "data_offset": 2048,
00:10:41.784        "data_size": 63488
00:10:41.784      }
00:10:41.784    ]
00:10:41.784  }'
00:10:41.784   11:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:41.784   11:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
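Deleting the backing malloc device, rather than detaching it through the raid RPC, exercises the hot-remove path: the raid sees the base bdev disappear, clears the slot's name and is_configured flag, and keeps only the uuid recorded for it, which is what the dump above reflects. The check that follows, as a sketch:

  scripts/rpc.py bdev_malloc_delete BaseBdev1
  # Slot 0 should now report is_configured=false while its uuid is retained.
  scripts/rpc.py bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[0]'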
00:10:42.356    11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:42.356    11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:42.356    11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:42.356    11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:42.356    11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:42.356   11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:10:42.356   11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:10:42.356   11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:42.356   11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:42.356  [2024-12-16 11:32:08.164111] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:42.356   11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:42.356   11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:42.356   11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:42.356   11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:42.356   11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:42.356   11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:42.356   11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:42.356   11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:42.356   11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:42.356   11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:42.356   11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:42.356    11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:42.356    11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:42.356    11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:42.356    11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:42.356    11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:42.356   11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:42.356    "name": "Existed_Raid",
00:10:42.356    "uuid": "4fd1a52a-f103-45fd-a25a-31195b0590ac",
00:10:42.356    "strip_size_kb": 0,
00:10:42.356    "state": "configuring",
00:10:42.356    "raid_level": "raid1",
00:10:42.356    "superblock": true,
00:10:42.356    "num_base_bdevs": 3,
00:10:42.356    "num_base_bdevs_discovered": 2,
00:10:42.356    "num_base_bdevs_operational": 3,
00:10:42.356    "base_bdevs_list": [
00:10:42.356      {
00:10:42.356        "name": null,
00:10:42.356        "uuid": "2c4c6118-6d56-4a4b-ab5a-eff1626f90ab",
00:10:42.356        "is_configured": false,
00:10:42.356        "data_offset": 0,
00:10:42.356        "data_size": 63488
00:10:42.356      },
00:10:42.356      {
00:10:42.356        "name": "BaseBdev2",
00:10:42.356        "uuid": "3f015ed7-3144-42be-8b09-92cf235c83c8",
00:10:42.356        "is_configured": true,
00:10:42.356        "data_offset": 2048,
00:10:42.356        "data_size": 63488
00:10:42.356      },
00:10:42.356      {
00:10:42.356        "name": "BaseBdev3",
00:10:42.356        "uuid": "d04fb5bb-69cf-4bea-a68f-daf093beadc8",
00:10:42.356        "is_configured": true,
00:10:42.356        "data_offset": 2048,
00:10:42.356        "data_size": 63488
00:10:42.356      }
00:10:42.356    ]
00:10:42.356  }'
00:10:42.356   11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:42.356   11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:42.616    11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:42.616    11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:42.616    11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:42.616    11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:42.616    11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:42.616   11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:10:42.616    11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:42.616    11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:42.616    11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:42.616    11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:10:42.616    11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:42.876   11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2c4c6118-6d56-4a4b-ab5a-eff1626f90ab
00:10:42.876   11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:42.876   11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:42.876  [2024-12-16 11:32:08.722358] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:10:42.876  [2024-12-16 11:32:08.722560] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:10:42.876  [2024-12-16 11:32:08.722574] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:10:42.876  [2024-12-16 11:32:08.722836] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:10:42.876  [2024-12-16 11:32:08.722995] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:10:42.876  [2024-12-16 11:32:08.723009] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00
00:10:42.876  NewBaseBdev
00:10:42.876  [2024-12-16 11:32:08.723108] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:42.876   11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:42.876   11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:10:42.876   11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev
00:10:42.876   11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:42.876   11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:10:42.876   11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:42.876   11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:42.876   11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:42.876   11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:42.876   11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:42.876   11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:42.876   11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:10:42.876   11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:42.876   11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:42.876  [
00:10:42.876  {
00:10:42.876  "name": "NewBaseBdev",
00:10:42.876  "aliases": [
00:10:42.876  "2c4c6118-6d56-4a4b-ab5a-eff1626f90ab"
00:10:42.876  ],
00:10:42.876  "product_name": "Malloc disk",
00:10:42.876  "block_size": 512,
00:10:42.876  "num_blocks": 65536,
00:10:42.876  "uuid": "2c4c6118-6d56-4a4b-ab5a-eff1626f90ab",
00:10:42.876  "assigned_rate_limits": {
00:10:42.876  "rw_ios_per_sec": 0,
00:10:42.876  "rw_mbytes_per_sec": 0,
00:10:42.876  "r_mbytes_per_sec": 0,
00:10:42.876  "w_mbytes_per_sec": 0
00:10:42.876  },
00:10:42.876  "claimed": true,
00:10:42.876  "claim_type": "exclusive_write",
00:10:42.876  "zoned": false,
00:10:42.876  "supported_io_types": {
00:10:42.876  "read": true,
00:10:42.876  "write": true,
00:10:42.876  "unmap": true,
00:10:42.876  "flush": true,
00:10:42.876  "reset": true,
00:10:42.876  "nvme_admin": false,
00:10:42.876  "nvme_io": false,
00:10:42.876  "nvme_io_md": false,
00:10:42.876  "write_zeroes": true,
00:10:42.876  "zcopy": true,
00:10:42.876  "get_zone_info": false,
00:10:42.876  "zone_management": false,
00:10:42.876  "zone_append": false,
00:10:42.876  "compare": false,
00:10:42.876  "compare_and_write": false,
00:10:42.876  "abort": true,
00:10:42.876  "seek_hole": false,
00:10:42.876  "seek_data": false,
00:10:42.876  "copy": true,
00:10:42.876  "nvme_iov_md": false
00:10:42.876  },
00:10:42.876  "memory_domains": [
00:10:42.876  {
00:10:42.876  "dma_device_id": "system",
00:10:42.876  "dma_device_type": 1
00:10:42.876  },
00:10:42.876  {
00:10:42.876  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:42.876  "dma_device_type": 2
00:10:42.876  }
00:10:42.876  ],
00:10:42.876  "driver_specific": {}
00:10:42.876  }
00:10:42.876  ]
00:10:42.876   11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:42.876   11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
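The replacement device was created with the uuid read back from the empty slot (bdev_malloc_create ... -u), so when it is examined its identifier lines up with the missing member and NewBaseBdev is claimed straight into that slot, as the DEBUG line above records. The same sequence as a sketch:

  # Capture the uuid the raid still remembers for slot 0 ...
  uuid=$(scripts/rpc.py bdev_raid_get_bdevs all | jq -r '.[0].base_bdevs_list[0].uuid')
  # ... and recreate the backing device under a new name but the same uuid.
  scripts/rpc.py bdev_malloc_create 32 512 -b NewBaseBdev -u "$uuid"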
00:10:42.876   11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:10:42.876   11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:42.876   11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:42.876   11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:42.876   11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:42.876   11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:42.876   11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:42.876   11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:42.876   11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:42.876   11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:42.876    11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:42.876    11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:42.876    11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:42.876    11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:42.876    11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:42.876   11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:42.876    "name": "Existed_Raid",
00:10:42.876    "uuid": "4fd1a52a-f103-45fd-a25a-31195b0590ac",
00:10:42.876    "strip_size_kb": 0,
00:10:42.876    "state": "online",
00:10:42.876    "raid_level": "raid1",
00:10:42.876    "superblock": true,
00:10:42.876    "num_base_bdevs": 3,
00:10:42.876    "num_base_bdevs_discovered": 3,
00:10:42.876    "num_base_bdevs_operational": 3,
00:10:42.876    "base_bdevs_list": [
00:10:42.876      {
00:10:42.877        "name": "NewBaseBdev",
00:10:42.877        "uuid": "2c4c6118-6d56-4a4b-ab5a-eff1626f90ab",
00:10:42.877        "is_configured": true,
00:10:42.877        "data_offset": 2048,
00:10:42.877        "data_size": 63488
00:10:42.877      },
00:10:42.877      {
00:10:42.877        "name": "BaseBdev2",
00:10:42.877        "uuid": "3f015ed7-3144-42be-8b09-92cf235c83c8",
00:10:42.877        "is_configured": true,
00:10:42.877        "data_offset": 2048,
00:10:42.877        "data_size": 63488
00:10:42.877      },
00:10:42.877      {
00:10:42.877        "name": "BaseBdev3",
00:10:42.877        "uuid": "d04fb5bb-69cf-4bea-a68f-daf093beadc8",
00:10:42.877        "is_configured": true,
00:10:42.877        "data_offset": 2048,
00:10:42.877        "data_size": 63488
00:10:42.877      }
00:10:42.877    ]
00:10:42.877  }'
00:10:42.877   11:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:42.877   11:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:43.136   11:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:10:43.136   11:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:10:43.136   11:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:43.136   11:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:43.136   11:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:10:43.136   11:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:43.136    11:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:10:43.136    11:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:43.136    11:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:43.136    11:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:43.136  [2024-12-16 11:32:09.173971] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:43.136    11:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:43.396   11:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:43.396    "name": "Existed_Raid",
00:10:43.396    "aliases": [
00:10:43.396      "4fd1a52a-f103-45fd-a25a-31195b0590ac"
00:10:43.396    ],
00:10:43.396    "product_name": "Raid Volume",
00:10:43.396    "block_size": 512,
00:10:43.396    "num_blocks": 63488,
00:10:43.396    "uuid": "4fd1a52a-f103-45fd-a25a-31195b0590ac",
00:10:43.396    "assigned_rate_limits": {
00:10:43.396      "rw_ios_per_sec": 0,
00:10:43.396      "rw_mbytes_per_sec": 0,
00:10:43.396      "r_mbytes_per_sec": 0,
00:10:43.396      "w_mbytes_per_sec": 0
00:10:43.396    },
00:10:43.396    "claimed": false,
00:10:43.396    "zoned": false,
00:10:43.396    "supported_io_types": {
00:10:43.396      "read": true,
00:10:43.396      "write": true,
00:10:43.396      "unmap": false,
00:10:43.396      "flush": false,
00:10:43.396      "reset": true,
00:10:43.396      "nvme_admin": false,
00:10:43.396      "nvme_io": false,
00:10:43.396      "nvme_io_md": false,
00:10:43.396      "write_zeroes": true,
00:10:43.396      "zcopy": false,
00:10:43.396      "get_zone_info": false,
00:10:43.396      "zone_management": false,
00:10:43.396      "zone_append": false,
00:10:43.396      "compare": false,
00:10:43.396      "compare_and_write": false,
00:10:43.396      "abort": false,
00:10:43.396      "seek_hole": false,
00:10:43.396      "seek_data": false,
00:10:43.396      "copy": false,
00:10:43.396      "nvme_iov_md": false
00:10:43.396    },
00:10:43.396    "memory_domains": [
00:10:43.396      {
00:10:43.396        "dma_device_id": "system",
00:10:43.396        "dma_device_type": 1
00:10:43.396      },
00:10:43.396      {
00:10:43.396        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:43.396        "dma_device_type": 2
00:10:43.396      },
00:10:43.396      {
00:10:43.396        "dma_device_id": "system",
00:10:43.396        "dma_device_type": 1
00:10:43.396      },
00:10:43.396      {
00:10:43.396        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:43.396        "dma_device_type": 2
00:10:43.396      },
00:10:43.396      {
00:10:43.396        "dma_device_id": "system",
00:10:43.396        "dma_device_type": 1
00:10:43.396      },
00:10:43.396      {
00:10:43.396        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:43.396        "dma_device_type": 2
00:10:43.396      }
00:10:43.396    ],
00:10:43.396    "driver_specific": {
00:10:43.396      "raid": {
00:10:43.396        "uuid": "4fd1a52a-f103-45fd-a25a-31195b0590ac",
00:10:43.396        "strip_size_kb": 0,
00:10:43.396        "state": "online",
00:10:43.396        "raid_level": "raid1",
00:10:43.396        "superblock": true,
00:10:43.396        "num_base_bdevs": 3,
00:10:43.396        "num_base_bdevs_discovered": 3,
00:10:43.396        "num_base_bdevs_operational": 3,
00:10:43.396        "base_bdevs_list": [
00:10:43.396          {
00:10:43.396            "name": "NewBaseBdev",
00:10:43.396            "uuid": "2c4c6118-6d56-4a4b-ab5a-eff1626f90ab",
00:10:43.396            "is_configured": true,
00:10:43.396            "data_offset": 2048,
00:10:43.396            "data_size": 63488
00:10:43.396          },
00:10:43.396          {
00:10:43.396            "name": "BaseBdev2",
00:10:43.396            "uuid": "3f015ed7-3144-42be-8b09-92cf235c83c8",
00:10:43.396            "is_configured": true,
00:10:43.396            "data_offset": 2048,
00:10:43.396            "data_size": 63488
00:10:43.396          },
00:10:43.396          {
00:10:43.396            "name": "BaseBdev3",
00:10:43.396            "uuid": "d04fb5bb-69cf-4bea-a68f-daf093beadc8",
00:10:43.396            "is_configured": true,
00:10:43.396            "data_offset": 2048,
00:10:43.396            "data_size": 63488
00:10:43.396          }
00:10:43.396        ]
00:10:43.396      }
00:10:43.396    }
00:10:43.396  }'
00:10:43.396    11:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:43.396   11:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:10:43.396  BaseBdev2
00:10:43.396  BaseBdev3'
00:10:43.396    11:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:43.396   11:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:10:43.396   11:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:43.396    11:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:10:43.396    11:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:43.396    11:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:43.396    11:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:43.396    11:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:43.396   11:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:43.396   11:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:10:43.396   11:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:43.396    11:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:10:43.396    11:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:43.396    11:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:43.396    11:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:43.396    11:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:43.396   11:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:43.396   11:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:10:43.397   11:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:43.397    11:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:10:43.397    11:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:43.397    11:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:43.397    11:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:43.397    11:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:43.397   11:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:43.397   11:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
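verify_raid_bdev_properties builds a "block_size md_size md_interleave dif_type" string for the raid volume and for each configured base bdev and requires them to match; with plain 512-byte malloc members the metadata fields are empty, hence the '512   ' values compared above. The per-bdev extraction it relies on, as a sketch:

  fmt='.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
  raid_fmt=$(scripts/rpc.py bdev_get_bdevs -b Existed_Raid | jq -r "$fmt")
  base_fmt=$(scripts/rpc.py bdev_get_bdevs -b BaseBdev2 | jq -r "$fmt")
  [[ "$raid_fmt" == "$base_fmt" ]]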
00:10:43.397   11:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:43.397   11:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:43.397   11:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:43.397  [2024-12-16 11:32:09.441205] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:43.397  [2024-12-16 11:32:09.441239] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:43.397  [2024-12-16 11:32:09.441317] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:43.397  [2024-12-16 11:32:09.441632] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:43.397  [2024-12-16 11:32:09.441648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline
00:10:43.397   11:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
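Deleting the raid walks it from online to offline, unregisters its io device and frees the base bdev bookkeeping once no members remain, which is what the destruct/cleanup DEBUG lines above trace. The teardown itself is a single RPC (sketch):

  # Tear down the assembled raid; the member bdevs are released and remain as standalone bdevs.
  scripts/rpc.py bdev_raid_delete Existed_Raid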
00:10:43.397   11:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 79364
00:10:43.397   11:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 79364 ']'
00:10:43.397   11:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 79364
00:10:43.397    11:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname
00:10:43.397   11:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:43.397    11:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79364
00:10:43.656  killing process with pid 79364
00:10:43.656   11:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:43.656   11:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:10:43.656   11:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79364'
00:10:43.656   11:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 79364
00:10:43.656  [2024-12-16 11:32:09.489381] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:43.656   11:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 79364
00:10:43.656  [2024-12-16 11:32:09.521195] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:43.915   11:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:10:43.915  
00:10:43.915  real	0m9.324s
00:10:43.915  user	0m15.893s
00:10:43.915  sys	0m2.008s
00:10:43.915   11:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:43.915   11:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:43.915  ************************************
00:10:43.915  END TEST raid_state_function_test_sb
00:10:43.915  ************************************
00:10:43.915   11:32:09 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3
00:10:43.915   11:32:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:10:43.915   11:32:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:43.915   11:32:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:43.915  ************************************
00:10:43.915  START TEST raid_superblock_test
00:10:43.915  ************************************
00:10:43.915   11:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3
00:10:43.915   11:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:10:43.915   11:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3
00:10:43.915   11:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:10:43.915   11:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:10:43.915   11:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:10:43.915   11:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:10:43.915   11:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:10:43.915   11:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:10:43.915   11:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:10:43.915   11:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:10:43.915   11:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:10:43.915   11:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:10:43.915   11:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:10:43.915   11:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:10:43.915   11:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:10:43.915   11:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=79973
00:10:43.915   11:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:10:43.915   11:32:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 79973
00:10:43.915   11:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 79973 ']'
00:10:43.915  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:43.915   11:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:43.915   11:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:43.915   11:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:43.915   11:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:10:43.915   11:32:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:43.915  [2024-12-16 11:32:09.943787] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:10:43.915  [2024-12-16 11:32:09.944015] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79973 ]
00:10:44.175  [2024-12-16 11:32:10.108318] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:44.175  [2024-12-16 11:32:10.158326] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:10:44.175  [2024-12-16 11:32:10.202323] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:44.175  [2024-12-16 11:32:10.202362] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:44.743   11:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:10:44.743   11:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0
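The superblock test runs against a fresh bdev_svc application started with raid debug logging (-L bdev_raid); waitforlisten then blocks until the RPC socket answers before any bdevs are created. A rough equivalent of that startup (sketch; the polling loop is an illustration, not the helper's exact code):

  test/app/bdev_svc/bdev_svc -L bdev_raid &
  raid_pid=$!
  # Poll the default RPC socket until the app accepts commands.
  until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done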
00:10:44.743   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:10:44.743   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:44.743   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:10:44.743   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:10:44.743   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:10:44.743   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:10:44.743   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:10:45.002   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.003  malloc1
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.003  [2024-12-16 11:32:10.829489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:10:45.003  [2024-12-16 11:32:10.829660] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:45.003  [2024-12-16 11:32:10.829719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:10:45.003  [2024-12-16 11:32:10.829761] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:45.003  [2024-12-16 11:32:10.832202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:45.003  [2024-12-16 11:32:10.832290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:10:45.003  pt1
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.003  malloc2
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.003  [2024-12-16 11:32:10.869441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:10:45.003  [2024-12-16 11:32:10.869646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:45.003  [2024-12-16 11:32:10.869677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:10:45.003  [2024-12-16 11:32:10.869694] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:45.003  [2024-12-16 11:32:10.872811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:45.003  [2024-12-16 11:32:10.872890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:10:45.003  pt2
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.003  malloc3
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.003  [2024-12-16 11:32:10.898364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:10:45.003  [2024-12-16 11:32:10.898497] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:45.003  [2024-12-16 11:32:10.898544] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:10:45.003  [2024-12-16 11:32:10.898580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:45.003  [2024-12-16 11:32:10.900833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:45.003  [2024-12-16 11:32:10.900913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:10:45.003  pt3
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.003  [2024-12-16 11:32:10.910379] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:10:45.003  [2024-12-16 11:32:10.912408] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:45.003  [2024-12-16 11:32:10.912546] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:10:45.003  [2024-12-16 11:32:10.912747] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:10:45.003  [2024-12-16 11:32:10.912799] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:10:45.003  [2024-12-16 11:32:10.913076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:10:45.003  [2024-12-16 11:32:10.913260] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:10:45.003  [2024-12-16 11:32:10.913333] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:10:45.003  [2024-12-16 11:32:10.913499] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:45.003    11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:45.003    11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:45.003    11:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:45.003    11:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.003    11:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:45.003   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:45.003    "name": "raid_bdev1",
00:10:45.003    "uuid": "6427a26b-1ddf-485f-8e31-8b85e06eb664",
00:10:45.003    "strip_size_kb": 0,
00:10:45.003    "state": "online",
00:10:45.003    "raid_level": "raid1",
00:10:45.003    "superblock": true,
00:10:45.003    "num_base_bdevs": 3,
00:10:45.003    "num_base_bdevs_discovered": 3,
00:10:45.003    "num_base_bdevs_operational": 3,
00:10:45.003    "base_bdevs_list": [
00:10:45.003      {
00:10:45.003        "name": "pt1",
00:10:45.003        "uuid": "00000000-0000-0000-0000-000000000001",
00:10:45.003        "is_configured": true,
00:10:45.004        "data_offset": 2048,
00:10:45.004        "data_size": 63488
00:10:45.004      },
00:10:45.004      {
00:10:45.004        "name": "pt2",
00:10:45.004        "uuid": "00000000-0000-0000-0000-000000000002",
00:10:45.004        "is_configured": true,
00:10:45.004        "data_offset": 2048,
00:10:45.004        "data_size": 63488
00:10:45.004      },
00:10:45.004      {
00:10:45.004        "name": "pt3",
00:10:45.004        "uuid": "00000000-0000-0000-0000-000000000003",
00:10:45.004        "is_configured": true,
00:10:45.004        "data_offset": 2048,
00:10:45.004        "data_size": 63488
00:10:45.004      }
00:10:45.004    ]
00:10:45.004  }'
00:10:45.004   11:32:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:45.004   11:32:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.571   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:10:45.571   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:10:45.571   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:45.571   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:45.571   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:10:45.571   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:45.571    11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:45.571    11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:45.571    11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:45.571    11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.571  [2024-12-16 11:32:11.345976] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:45.571    11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:45.571   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:45.571    "name": "raid_bdev1",
00:10:45.571    "aliases": [
00:10:45.571      "6427a26b-1ddf-485f-8e31-8b85e06eb664"
00:10:45.571    ],
00:10:45.571    "product_name": "Raid Volume",
00:10:45.571    "block_size": 512,
00:10:45.571    "num_blocks": 63488,
00:10:45.571    "uuid": "6427a26b-1ddf-485f-8e31-8b85e06eb664",
00:10:45.571    "assigned_rate_limits": {
00:10:45.571      "rw_ios_per_sec": 0,
00:10:45.571      "rw_mbytes_per_sec": 0,
00:10:45.571      "r_mbytes_per_sec": 0,
00:10:45.571      "w_mbytes_per_sec": 0
00:10:45.571    },
00:10:45.571    "claimed": false,
00:10:45.571    "zoned": false,
00:10:45.571    "supported_io_types": {
00:10:45.571      "read": true,
00:10:45.571      "write": true,
00:10:45.571      "unmap": false,
00:10:45.571      "flush": false,
00:10:45.571      "reset": true,
00:10:45.571      "nvme_admin": false,
00:10:45.571      "nvme_io": false,
00:10:45.571      "nvme_io_md": false,
00:10:45.571      "write_zeroes": true,
00:10:45.571      "zcopy": false,
00:10:45.571      "get_zone_info": false,
00:10:45.571      "zone_management": false,
00:10:45.571      "zone_append": false,
00:10:45.571      "compare": false,
00:10:45.571      "compare_and_write": false,
00:10:45.571      "abort": false,
00:10:45.571      "seek_hole": false,
00:10:45.571      "seek_data": false,
00:10:45.571      "copy": false,
00:10:45.571      "nvme_iov_md": false
00:10:45.571    },
00:10:45.571    "memory_domains": [
00:10:45.571      {
00:10:45.571        "dma_device_id": "system",
00:10:45.571        "dma_device_type": 1
00:10:45.571      },
00:10:45.571      {
00:10:45.571        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:45.571        "dma_device_type": 2
00:10:45.571      },
00:10:45.571      {
00:10:45.571        "dma_device_id": "system",
00:10:45.571        "dma_device_type": 1
00:10:45.571      },
00:10:45.571      {
00:10:45.571        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:45.571        "dma_device_type": 2
00:10:45.571      },
00:10:45.571      {
00:10:45.571        "dma_device_id": "system",
00:10:45.571        "dma_device_type": 1
00:10:45.571      },
00:10:45.571      {
00:10:45.571        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:45.571        "dma_device_type": 2
00:10:45.571      }
00:10:45.571    ],
00:10:45.571    "driver_specific": {
00:10:45.571      "raid": {
00:10:45.571        "uuid": "6427a26b-1ddf-485f-8e31-8b85e06eb664",
00:10:45.571        "strip_size_kb": 0,
00:10:45.571        "state": "online",
00:10:45.571        "raid_level": "raid1",
00:10:45.571        "superblock": true,
00:10:45.571        "num_base_bdevs": 3,
00:10:45.571        "num_base_bdevs_discovered": 3,
00:10:45.571        "num_base_bdevs_operational": 3,
00:10:45.571        "base_bdevs_list": [
00:10:45.571          {
00:10:45.571            "name": "pt1",
00:10:45.571            "uuid": "00000000-0000-0000-0000-000000000001",
00:10:45.571            "is_configured": true,
00:10:45.571            "data_offset": 2048,
00:10:45.571            "data_size": 63488
00:10:45.571          },
00:10:45.571          {
00:10:45.571            "name": "pt2",
00:10:45.571            "uuid": "00000000-0000-0000-0000-000000000002",
00:10:45.571            "is_configured": true,
00:10:45.571            "data_offset": 2048,
00:10:45.571            "data_size": 63488
00:10:45.572          },
00:10:45.572          {
00:10:45.572            "name": "pt3",
00:10:45.572            "uuid": "00000000-0000-0000-0000-000000000003",
00:10:45.572            "is_configured": true,
00:10:45.572            "data_offset": 2048,
00:10:45.572            "data_size": 63488
00:10:45.572          }
00:10:45.572        ]
00:10:45.572      }
00:10:45.572    }
00:10:45.572  }'
00:10:45.572    11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:45.572   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:10:45.572  pt2
00:10:45.572  pt3'
00:10:45.572    11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:45.572   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:10:45.572   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:45.572    11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:10:45.572    11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:45.572    11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:45.572    11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.572    11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:45.572   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:45.572   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:10:45.572   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:45.572    11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:10:45.572    11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:45.572    11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:45.572    11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.572    11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:45.572   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:45.572   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:10:45.572   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:45.572    11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:45.572    11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:10:45.572    11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:45.572    11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.572    11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:45.572   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:45.572   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:10:45.572    11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:45.572    11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:45.572    11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.572    11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:10:45.572  [2024-12-16 11:32:11.625479] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:45.831    11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6427a26b-1ddf-485f-8e31-8b85e06eb664
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6427a26b-1ddf-485f-8e31-8b85e06eb664 ']'
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.831  [2024-12-16 11:32:11.669068] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:45.831  [2024-12-16 11:32:11.669096] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:45.831  [2024-12-16 11:32:11.669189] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:45.831  [2024-12-16 11:32:11.669266] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:45.831  [2024-12-16 11:32:11.669281] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:45.831    11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:45.831    11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:10:45.831    11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:45.831    11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.831    11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:45.831    11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:10:45.831    11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:10:45.831    11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:45.831    11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.831    11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:45.831    11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.831  [2024-12-16 11:32:11.824815] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:10:45.831  [2024-12-16 11:32:11.826804] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:10:45.831  [2024-12-16 11:32:11.826913] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:10:45.831  [2024-12-16 11:32:11.826976] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:10:45.831  [2024-12-16 11:32:11.827026] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:10:45.831  [2024-12-16 11:32:11.827049] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:10:45.831  [2024-12-16 11:32:11.827064] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:45.831  [2024-12-16 11:32:11.827076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring
00:10:45.831  request:
00:10:45.831  {
00:10:45.831  "name": "raid_bdev1",
00:10:45.831  "raid_level": "raid1",
00:10:45.831  "base_bdevs": [
00:10:45.831  "malloc1",
00:10:45.831  "malloc2",
00:10:45.831  "malloc3"
00:10:45.831  ],
00:10:45.831  "superblock": false,
00:10:45.831  "method": "bdev_raid_create",
00:10:45.831  "req_id": 1
00:10:45.831  }
00:10:45.831  Got JSON-RPC error response
00:10:45.831  response:
00:10:45.831  {
00:10:45.831  "code": -17,
00:10:45.831  "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:10:45.831  }
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:10:45.831    11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:45.831    11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:45.831    11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.831    11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:10:45.831    11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:45.831   11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.831  [2024-12-16 11:32:11.892727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:10:45.831  [2024-12-16 11:32:11.892873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:45.831  [2024-12-16 11:32:11.892931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:10:45.831  [2024-12-16 11:32:11.892965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:45.831  [2024-12-16 11:32:11.895383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:45.831  [2024-12-16 11:32:11.895474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:10:45.831  [2024-12-16 11:32:11.895599] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:10:45.831  [2024-12-16 11:32:11.895684] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:10:46.094  pt1
00:10:46.094   11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.094   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:10:46.094   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:46.094   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:46.094   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:46.094   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:46.094   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:46.094   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:46.094   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:46.094   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:46.094   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:46.094    11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:46.094    11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:46.094    11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.094    11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:46.094    11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.094   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:46.094    "name": "raid_bdev1",
00:10:46.094    "uuid": "6427a26b-1ddf-485f-8e31-8b85e06eb664",
00:10:46.094    "strip_size_kb": 0,
00:10:46.094    "state": "configuring",
00:10:46.094    "raid_level": "raid1",
00:10:46.094    "superblock": true,
00:10:46.094    "num_base_bdevs": 3,
00:10:46.094    "num_base_bdevs_discovered": 1,
00:10:46.094    "num_base_bdevs_operational": 3,
00:10:46.094    "base_bdevs_list": [
00:10:46.094      {
00:10:46.094        "name": "pt1",
00:10:46.094        "uuid": "00000000-0000-0000-0000-000000000001",
00:10:46.094        "is_configured": true,
00:10:46.094        "data_offset": 2048,
00:10:46.094        "data_size": 63488
00:10:46.094      },
00:10:46.094      {
00:10:46.094        "name": null,
00:10:46.094        "uuid": "00000000-0000-0000-0000-000000000002",
00:10:46.094        "is_configured": false,
00:10:46.094        "data_offset": 2048,
00:10:46.094        "data_size": 63488
00:10:46.094      },
00:10:46.094      {
00:10:46.094        "name": null,
00:10:46.094        "uuid": "00000000-0000-0000-0000-000000000003",
00:10:46.094        "is_configured": false,
00:10:46.094        "data_offset": 2048,
00:10:46.094        "data_size": 63488
00:10:46.094      }
00:10:46.094    ]
00:10:46.094  }'
00:10:46.094   11:32:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:46.094   11:32:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.354   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:10:46.354   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:46.354   11:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:46.354   11:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.354  [2024-12-16 11:32:12.375931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:10:46.354  [2024-12-16 11:32:12.376013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:46.354  [2024-12-16 11:32:12.376035] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:10:46.354  [2024-12-16 11:32:12.376049] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:46.354  [2024-12-16 11:32:12.376483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:46.354  [2024-12-16 11:32:12.376503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:10:46.354  [2024-12-16 11:32:12.376594] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:10:46.354  [2024-12-16 11:32:12.376620] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:46.354  pt2
00:10:46.354   11:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.354   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:10:46.354   11:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:46.354   11:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.354  [2024-12-16 11:32:12.383915] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:10:46.354   11:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.354   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:10:46.354   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:46.354   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:46.354   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:46.354   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:46.354   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:46.354   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:46.354   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:46.354   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:46.354   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:46.354    11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:46.354    11:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:46.354    11:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.354    11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:46.354    11:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.612   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:46.612    "name": "raid_bdev1",
00:10:46.612    "uuid": "6427a26b-1ddf-485f-8e31-8b85e06eb664",
00:10:46.612    "strip_size_kb": 0,
00:10:46.612    "state": "configuring",
00:10:46.612    "raid_level": "raid1",
00:10:46.612    "superblock": true,
00:10:46.612    "num_base_bdevs": 3,
00:10:46.612    "num_base_bdevs_discovered": 1,
00:10:46.612    "num_base_bdevs_operational": 3,
00:10:46.612    "base_bdevs_list": [
00:10:46.612      {
00:10:46.612        "name": "pt1",
00:10:46.612        "uuid": "00000000-0000-0000-0000-000000000001",
00:10:46.612        "is_configured": true,
00:10:46.612        "data_offset": 2048,
00:10:46.612        "data_size": 63488
00:10:46.612      },
00:10:46.612      {
00:10:46.612        "name": null,
00:10:46.612        "uuid": "00000000-0000-0000-0000-000000000002",
00:10:46.612        "is_configured": false,
00:10:46.612        "data_offset": 0,
00:10:46.612        "data_size": 63488
00:10:46.612      },
00:10:46.612      {
00:10:46.612        "name": null,
00:10:46.612        "uuid": "00000000-0000-0000-0000-000000000003",
00:10:46.612        "is_configured": false,
00:10:46.612        "data_offset": 2048,
00:10:46.612        "data_size": 63488
00:10:46.612      }
00:10:46.612    ]
00:10:46.612  }'
00:10:46.612   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:46.612   11:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.871   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:10:46.871   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:10:46.871   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:46.871   11:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:46.871   11:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.871  [2024-12-16 11:32:12.859127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:10:46.871  [2024-12-16 11:32:12.859205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:46.871  [2024-12-16 11:32:12.859227] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:10:46.871  [2024-12-16 11:32:12.859242] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:46.871  [2024-12-16 11:32:12.859698] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:46.871  [2024-12-16 11:32:12.859719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:10:46.871  [2024-12-16 11:32:12.859799] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:10:46.871  [2024-12-16 11:32:12.859829] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:46.871  pt2
00:10:46.871   11:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.871   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:10:46.871   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:10:46.871   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:10:46.871   11:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:46.871   11:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.871  [2024-12-16 11:32:12.871063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:10:46.871  [2024-12-16 11:32:12.871125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:46.871  [2024-12-16 11:32:12.871159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:10:46.871  [2024-12-16 11:32:12.871167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:46.871  [2024-12-16 11:32:12.871505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:46.871  [2024-12-16 11:32:12.871522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:10:46.871  [2024-12-16 11:32:12.871598] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:10:46.871  [2024-12-16 11:32:12.871618] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:10:46.871  [2024-12-16 11:32:12.871714] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:10:46.871  [2024-12-16 11:32:12.871722] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:10:46.871  [2024-12-16 11:32:12.871946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:10:46.871  [2024-12-16 11:32:12.872063] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:10:46.871  [2024-12-16 11:32:12.872075] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:10:46.871  [2024-12-16 11:32:12.872176] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:46.871  pt3
00:10:46.871   11:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.871   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:10:46.871   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:10:46.871   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:10:46.871   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:46.871   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:46.871   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:46.871   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:46.871   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:46.871   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:46.871   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:46.871   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:46.871   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:46.871    11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:46.871    11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:46.871    11:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:46.871    11:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.871    11:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.871   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:46.871    "name": "raid_bdev1",
00:10:46.871    "uuid": "6427a26b-1ddf-485f-8e31-8b85e06eb664",
00:10:46.871    "strip_size_kb": 0,
00:10:46.871    "state": "online",
00:10:46.871    "raid_level": "raid1",
00:10:46.871    "superblock": true,
00:10:46.871    "num_base_bdevs": 3,
00:10:46.871    "num_base_bdevs_discovered": 3,
00:10:46.871    "num_base_bdevs_operational": 3,
00:10:46.871    "base_bdevs_list": [
00:10:46.871      {
00:10:46.871        "name": "pt1",
00:10:46.871        "uuid": "00000000-0000-0000-0000-000000000001",
00:10:46.871        "is_configured": true,
00:10:46.871        "data_offset": 2048,
00:10:46.871        "data_size": 63488
00:10:46.871      },
00:10:46.871      {
00:10:46.871        "name": "pt2",
00:10:46.871        "uuid": "00000000-0000-0000-0000-000000000002",
00:10:46.871        "is_configured": true,
00:10:46.871        "data_offset": 2048,
00:10:46.871        "data_size": 63488
00:10:46.871      },
00:10:46.871      {
00:10:46.871        "name": "pt3",
00:10:46.871        "uuid": "00000000-0000-0000-0000-000000000003",
00:10:46.871        "is_configured": true,
00:10:46.871        "data_offset": 2048,
00:10:46.871        "data_size": 63488
00:10:46.871      }
00:10:46.871    ]
00:10:46.871  }'
00:10:46.871   11:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:46.871   11:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:47.437   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:10:47.437   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:10:47.437   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:47.437   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:47.437   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:10:47.437   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:47.438    11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:47.438    11:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:47.438    11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:47.438    11:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:47.438  [2024-12-16 11:32:13.314648] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:47.438    11:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:47.438   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:47.438    "name": "raid_bdev1",
00:10:47.438    "aliases": [
00:10:47.438      "6427a26b-1ddf-485f-8e31-8b85e06eb664"
00:10:47.438    ],
00:10:47.438    "product_name": "Raid Volume",
00:10:47.438    "block_size": 512,
00:10:47.438    "num_blocks": 63488,
00:10:47.438    "uuid": "6427a26b-1ddf-485f-8e31-8b85e06eb664",
00:10:47.438    "assigned_rate_limits": {
00:10:47.438      "rw_ios_per_sec": 0,
00:10:47.438      "rw_mbytes_per_sec": 0,
00:10:47.438      "r_mbytes_per_sec": 0,
00:10:47.438      "w_mbytes_per_sec": 0
00:10:47.438    },
00:10:47.438    "claimed": false,
00:10:47.438    "zoned": false,
00:10:47.438    "supported_io_types": {
00:10:47.438      "read": true,
00:10:47.438      "write": true,
00:10:47.438      "unmap": false,
00:10:47.438      "flush": false,
00:10:47.438      "reset": true,
00:10:47.438      "nvme_admin": false,
00:10:47.438      "nvme_io": false,
00:10:47.438      "nvme_io_md": false,
00:10:47.438      "write_zeroes": true,
00:10:47.438      "zcopy": false,
00:10:47.438      "get_zone_info": false,
00:10:47.438      "zone_management": false,
00:10:47.438      "zone_append": false,
00:10:47.438      "compare": false,
00:10:47.438      "compare_and_write": false,
00:10:47.438      "abort": false,
00:10:47.438      "seek_hole": false,
00:10:47.438      "seek_data": false,
00:10:47.438      "copy": false,
00:10:47.438      "nvme_iov_md": false
00:10:47.438    },
00:10:47.438    "memory_domains": [
00:10:47.438      {
00:10:47.438        "dma_device_id": "system",
00:10:47.438        "dma_device_type": 1
00:10:47.438      },
00:10:47.438      {
00:10:47.438        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:47.438        "dma_device_type": 2
00:10:47.438      },
00:10:47.438      {
00:10:47.438        "dma_device_id": "system",
00:10:47.438        "dma_device_type": 1
00:10:47.438      },
00:10:47.438      {
00:10:47.438        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:47.438        "dma_device_type": 2
00:10:47.438      },
00:10:47.438      {
00:10:47.438        "dma_device_id": "system",
00:10:47.438        "dma_device_type": 1
00:10:47.438      },
00:10:47.438      {
00:10:47.438        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:47.438        "dma_device_type": 2
00:10:47.438      }
00:10:47.438    ],
00:10:47.438    "driver_specific": {
00:10:47.438      "raid": {
00:10:47.438        "uuid": "6427a26b-1ddf-485f-8e31-8b85e06eb664",
00:10:47.438        "strip_size_kb": 0,
00:10:47.438        "state": "online",
00:10:47.438        "raid_level": "raid1",
00:10:47.438        "superblock": true,
00:10:47.438        "num_base_bdevs": 3,
00:10:47.438        "num_base_bdevs_discovered": 3,
00:10:47.438        "num_base_bdevs_operational": 3,
00:10:47.438        "base_bdevs_list": [
00:10:47.438          {
00:10:47.438            "name": "pt1",
00:10:47.438            "uuid": "00000000-0000-0000-0000-000000000001",
00:10:47.438            "is_configured": true,
00:10:47.438            "data_offset": 2048,
00:10:47.438            "data_size": 63488
00:10:47.438          },
00:10:47.438          {
00:10:47.438            "name": "pt2",
00:10:47.438            "uuid": "00000000-0000-0000-0000-000000000002",
00:10:47.438            "is_configured": true,
00:10:47.438            "data_offset": 2048,
00:10:47.438            "data_size": 63488
00:10:47.438          },
00:10:47.438          {
00:10:47.438            "name": "pt3",
00:10:47.438            "uuid": "00000000-0000-0000-0000-000000000003",
00:10:47.438            "is_configured": true,
00:10:47.438            "data_offset": 2048,
00:10:47.438            "data_size": 63488
00:10:47.438          }
00:10:47.438        ]
00:10:47.438      }
00:10:47.438    }
00:10:47.438  }'
00:10:47.438    11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:47.438   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:10:47.438  pt2
00:10:47.438  pt3'
00:10:47.438    11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:47.438   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:10:47.438   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:47.438    11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:10:47.438    11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:47.438    11:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:47.438    11:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:47.438    11:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:47.438   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:47.438   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:10:47.438   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:47.438    11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:47.438    11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:10:47.438    11:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:47.438    11:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:47.697    11:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:47.697   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:47.697   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:10:47.697   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:47.697    11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:10:47.697    11:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:47.697    11:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:47.697    11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:47.697    11:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:47.697   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:10:47.697   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:10:47.697    11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:10:47.697    11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:47.697    11:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:47.697    11:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:47.697  [2024-12-16 11:32:13.574191] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:47.697    11:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:47.697   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6427a26b-1ddf-485f-8e31-8b85e06eb664 '!=' 6427a26b-1ddf-485f-8e31-8b85e06eb664 ']'
00:10:47.697   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:10:47.697   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:10:47.697   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:10:47.697   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:10:47.697   11:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:47.697   11:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:47.697  [2024-12-16 11:32:13.625865] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:10:47.697   11:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:47.697   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:10:47.697   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:47.697   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:47.697   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:47.697   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:47.697   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:47.697   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:47.697   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:47.697   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:47.697   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:47.697    11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:47.697    11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:47.697    11:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:47.697    11:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:47.697    11:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:47.697   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:47.697    "name": "raid_bdev1",
00:10:47.697    "uuid": "6427a26b-1ddf-485f-8e31-8b85e06eb664",
00:10:47.697    "strip_size_kb": 0,
00:10:47.697    "state": "online",
00:10:47.697    "raid_level": "raid1",
00:10:47.697    "superblock": true,
00:10:47.697    "num_base_bdevs": 3,
00:10:47.697    "num_base_bdevs_discovered": 2,
00:10:47.697    "num_base_bdevs_operational": 2,
00:10:47.697    "base_bdevs_list": [
00:10:47.697      {
00:10:47.697        "name": null,
00:10:47.697        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:47.697        "is_configured": false,
00:10:47.697        "data_offset": 0,
00:10:47.697        "data_size": 63488
00:10:47.697      },
00:10:47.697      {
00:10:47.697        "name": "pt2",
00:10:47.697        "uuid": "00000000-0000-0000-0000-000000000002",
00:10:47.697        "is_configured": true,
00:10:47.697        "data_offset": 2048,
00:10:47.697        "data_size": 63488
00:10:47.697      },
00:10:47.697      {
00:10:47.697        "name": "pt3",
00:10:47.697        "uuid": "00000000-0000-0000-0000-000000000003",
00:10:47.697        "is_configured": true,
00:10:47.697        "data_offset": 2048,
00:10:47.697        "data_size": 63488
00:10:47.697      }
00:10:47.697    ]
00:10:47.697  }'
00:10:47.697   11:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:47.697   11:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.265  [2024-12-16 11:32:14.057081] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:48.265  [2024-12-16 11:32:14.057121] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:48.265  [2024-12-16 11:32:14.057207] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:48.265  [2024-12-16 11:32:14.057269] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:48.265  [2024-12-16 11:32:14.057278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:48.265    11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:48.265    11:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:48.265    11:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.265    11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:10:48.265    11:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.265  [2024-12-16 11:32:14.136956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:10:48.265  [2024-12-16 11:32:14.137097] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:48.265  [2024-12-16 11:32:14.137123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580
00:10:48.265  [2024-12-16 11:32:14.137134] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:48.265  [2024-12-16 11:32:14.139649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:48.265  [2024-12-16 11:32:14.139693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:10:48.265  [2024-12-16 11:32:14.139776] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:10:48.265  [2024-12-16 11:32:14.139810] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:48.265  pt2
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:48.265   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:48.266   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:48.266   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:48.266    11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:48.266    11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:48.266    11:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:48.266    11:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.266    11:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:48.266   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:48.266    "name": "raid_bdev1",
00:10:48.266    "uuid": "6427a26b-1ddf-485f-8e31-8b85e06eb664",
00:10:48.266    "strip_size_kb": 0,
00:10:48.266    "state": "configuring",
00:10:48.266    "raid_level": "raid1",
00:10:48.266    "superblock": true,
00:10:48.266    "num_base_bdevs": 3,
00:10:48.266    "num_base_bdevs_discovered": 1,
00:10:48.266    "num_base_bdevs_operational": 2,
00:10:48.266    "base_bdevs_list": [
00:10:48.266      {
00:10:48.266        "name": null,
00:10:48.266        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:48.266        "is_configured": false,
00:10:48.266        "data_offset": 2048,
00:10:48.266        "data_size": 63488
00:10:48.266      },
00:10:48.266      {
00:10:48.266        "name": "pt2",
00:10:48.266        "uuid": "00000000-0000-0000-0000-000000000002",
00:10:48.266        "is_configured": true,
00:10:48.266        "data_offset": 2048,
00:10:48.266        "data_size": 63488
00:10:48.266      },
00:10:48.266      {
00:10:48.266        "name": null,
00:10:48.266        "uuid": "00000000-0000-0000-0000-000000000003",
00:10:48.266        "is_configured": false,
00:10:48.266        "data_offset": 2048,
00:10:48.266        "data_size": 63488
00:10:48.266      }
00:10:48.266    ]
00:10:48.266  }'
00:10:48.266   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:48.266   11:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.525   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:10:48.525   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:10:48.525   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2
00:10:48.525   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:10:48.525   11:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:48.525   11:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.525  [2024-12-16 11:32:14.576288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:10:48.525  [2024-12-16 11:32:14.576443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:48.525  [2024-12-16 11:32:14.576489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:10:48.525  [2024-12-16 11:32:14.576523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:48.525  [2024-12-16 11:32:14.576985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:48.525  [2024-12-16 11:32:14.577055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:10:48.525  [2024-12-16 11:32:14.577171] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:10:48.525  [2024-12-16 11:32:14.577224] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:10:48.525  [2024-12-16 11:32:14.577350] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:10:48.525  [2024-12-16 11:32:14.577389] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:10:48.525  [2024-12-16 11:32:14.577705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:10:48.525  [2024-12-16 11:32:14.577877] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:10:48.525  [2024-12-16 11:32:14.577922] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00
00:10:48.525  [2024-12-16 11:32:14.578079] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:48.525  pt3
00:10:48.525   11:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:48.525   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:10:48.525   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:48.525   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:48.525   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:48.525   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:48.525   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:48.525   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:48.525   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:48.525   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:48.525   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:48.525    11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:48.783    11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:48.783    11:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:48.783    11:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.783    11:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:48.783   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:48.783    "name": "raid_bdev1",
00:10:48.783    "uuid": "6427a26b-1ddf-485f-8e31-8b85e06eb664",
00:10:48.783    "strip_size_kb": 0,
00:10:48.783    "state": "online",
00:10:48.783    "raid_level": "raid1",
00:10:48.783    "superblock": true,
00:10:48.783    "num_base_bdevs": 3,
00:10:48.783    "num_base_bdevs_discovered": 2,
00:10:48.783    "num_base_bdevs_operational": 2,
00:10:48.783    "base_bdevs_list": [
00:10:48.783      {
00:10:48.783        "name": null,
00:10:48.783        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:48.783        "is_configured": false,
00:10:48.783        "data_offset": 2048,
00:10:48.783        "data_size": 63488
00:10:48.783      },
00:10:48.783      {
00:10:48.783        "name": "pt2",
00:10:48.783        "uuid": "00000000-0000-0000-0000-000000000002",
00:10:48.783        "is_configured": true,
00:10:48.783        "data_offset": 2048,
00:10:48.783        "data_size": 63488
00:10:48.783      },
00:10:48.783      {
00:10:48.783        "name": "pt3",
00:10:48.783        "uuid": "00000000-0000-0000-0000-000000000003",
00:10:48.783        "is_configured": true,
00:10:48.783        "data_offset": 2048,
00:10:48.783        "data_size": 63488
00:10:48.783      }
00:10:48.783    ]
00:10:48.783  }'
00:10:48.783   11:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:48.783   11:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:49.042   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:10:49.042   11:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:49.042   11:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:49.042  [2024-12-16 11:32:15.079388] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:49.042  [2024-12-16 11:32:15.079422] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:49.042  [2024-12-16 11:32:15.079502] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:49.042  [2024-12-16 11:32:15.079577] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:49.042  [2024-12-16 11:32:15.079590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline
00:10:49.042   11:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:49.042    11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:49.042    11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:10:49.042    11:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:49.042    11:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:49.042    11:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:49.299   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:10:49.299   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:10:49.299   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']'
00:10:49.299   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2
00:10:49.299   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3
00:10:49.299   11:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:49.299   11:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:49.299   11:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:49.299   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:10:49.299   11:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:49.299   11:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:49.299  [2024-12-16 11:32:15.151300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:10:49.299  [2024-12-16 11:32:15.151371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:49.299  [2024-12-16 11:32:15.151389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:10:49.299  [2024-12-16 11:32:15.151401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:49.299  [2024-12-16 11:32:15.153809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:49.299  [2024-12-16 11:32:15.153853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:10:49.299  [2024-12-16 11:32:15.153931] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:10:49.299  [2024-12-16 11:32:15.153975] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:10:49.299  [2024-12-16 11:32:15.154083] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:10:49.299  [2024-12-16 11:32:15.154107] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:49.299  [2024-12-16 11:32:15.154124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring
00:10:49.299  [2024-12-16 11:32:15.154165] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:49.299  pt1
00:10:49.299   11:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:49.299   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']'
00:10:49.299   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:10:49.299   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:49.299   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:49.299   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:49.299   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:49.299   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:49.299   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:49.299   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:49.300   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:49.300   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:49.300    11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:49.300    11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:49.300    11:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:49.300    11:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:49.300    11:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:49.300   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:49.300    "name": "raid_bdev1",
00:10:49.300    "uuid": "6427a26b-1ddf-485f-8e31-8b85e06eb664",
00:10:49.300    "strip_size_kb": 0,
00:10:49.300    "state": "configuring",
00:10:49.300    "raid_level": "raid1",
00:10:49.300    "superblock": true,
00:10:49.300    "num_base_bdevs": 3,
00:10:49.300    "num_base_bdevs_discovered": 1,
00:10:49.300    "num_base_bdevs_operational": 2,
00:10:49.300    "base_bdevs_list": [
00:10:49.300      {
00:10:49.300        "name": null,
00:10:49.300        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:49.300        "is_configured": false,
00:10:49.300        "data_offset": 2048,
00:10:49.300        "data_size": 63488
00:10:49.300      },
00:10:49.300      {
00:10:49.300        "name": "pt2",
00:10:49.300        "uuid": "00000000-0000-0000-0000-000000000002",
00:10:49.300        "is_configured": true,
00:10:49.300        "data_offset": 2048,
00:10:49.300        "data_size": 63488
00:10:49.300      },
00:10:49.300      {
00:10:49.300        "name": null,
00:10:49.300        "uuid": "00000000-0000-0000-0000-000000000003",
00:10:49.300        "is_configured": false,
00:10:49.300        "data_offset": 2048,
00:10:49.300        "data_size": 63488
00:10:49.300      }
00:10:49.300    ]
00:10:49.300  }'
00:10:49.300   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:49.300   11:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:49.866    11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring
00:10:49.866    11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:10:49.866    11:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:49.866    11:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:49.866    11:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:49.866   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]]
00:10:49.866   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:10:49.866   11:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:49.866   11:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:49.866  [2024-12-16 11:32:15.678366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:10:49.866  [2024-12-16 11:32:15.678488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:49.866  [2024-12-16 11:32:15.678546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:10:49.866  [2024-12-16 11:32:15.678582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:49.866  [2024-12-16 11:32:15.678998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:49.866  [2024-12-16 11:32:15.679060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:10:49.866  [2024-12-16 11:32:15.679164] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:10:49.866  [2024-12-16 11:32:15.679247] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:10:49.866  [2024-12-16 11:32:15.679381] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400
00:10:49.866  [2024-12-16 11:32:15.679420] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:10:49.866  [2024-12-16 11:32:15.679667] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:10:49.866  [2024-12-16 11:32:15.679834] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400
00:10:49.866  [2024-12-16 11:32:15.679874] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400
00:10:49.866  [2024-12-16 11:32:15.680019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:49.866  pt3
00:10:49.866   11:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:49.866   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:10:49.866   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:49.866   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:49.866   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:49.866   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:49.866   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:49.866   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:49.866   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:49.866   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:49.866   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:49.866    11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:49.866    11:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:49.866    11:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:49.866    11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:49.866    11:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:49.866   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:49.866    "name": "raid_bdev1",
00:10:49.866    "uuid": "6427a26b-1ddf-485f-8e31-8b85e06eb664",
00:10:49.866    "strip_size_kb": 0,
00:10:49.866    "state": "online",
00:10:49.866    "raid_level": "raid1",
00:10:49.866    "superblock": true,
00:10:49.866    "num_base_bdevs": 3,
00:10:49.866    "num_base_bdevs_discovered": 2,
00:10:49.866    "num_base_bdevs_operational": 2,
00:10:49.866    "base_bdevs_list": [
00:10:49.866      {
00:10:49.866        "name": null,
00:10:49.866        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:49.866        "is_configured": false,
00:10:49.866        "data_offset": 2048,
00:10:49.867        "data_size": 63488
00:10:49.867      },
00:10:49.867      {
00:10:49.867        "name": "pt2",
00:10:49.867        "uuid": "00000000-0000-0000-0000-000000000002",
00:10:49.867        "is_configured": true,
00:10:49.867        "data_offset": 2048,
00:10:49.867        "data_size": 63488
00:10:49.867      },
00:10:49.867      {
00:10:49.867        "name": "pt3",
00:10:49.867        "uuid": "00000000-0000-0000-0000-000000000003",
00:10:49.867        "is_configured": true,
00:10:49.867        "data_offset": 2048,
00:10:49.867        "data_size": 63488
00:10:49.867      }
00:10:49.867    ]
00:10:49.867  }'
00:10:49.867   11:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:49.867   11:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:50.125    11:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:10:50.125    11:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online
00:10:50.125    11:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:50.125    11:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:50.125    11:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:50.125   11:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:10:50.125    11:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:50.125    11:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
00:10:50.125    11:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:50.125    11:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:50.125  [2024-12-16 11:32:16.153904] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:50.125    11:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:50.383   11:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 6427a26b-1ddf-485f-8e31-8b85e06eb664 '!=' 6427a26b-1ddf-485f-8e31-8b85e06eb664 ']'
00:10:50.383   11:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 79973
00:10:50.383   11:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 79973 ']'
00:10:50.383   11:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 79973
00:10:50.383    11:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname
00:10:50.383   11:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:50.383    11:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79973
00:10:50.383   11:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:50.383  killing process with pid 79973
00:10:50.383   11:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:10:50.383   11:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79973'
00:10:50.383   11:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 79973
00:10:50.383  [2024-12-16 11:32:16.237497] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:50.383  [2024-12-16 11:32:16.237607] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:50.383  [2024-12-16 11:32:16.237670] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:50.383  [2024-12-16 11:32:16.237680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline
00:10:50.383   11:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 79973
00:10:50.383  [2024-12-16 11:32:16.272437] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:50.644   11:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:10:50.644  
00:10:50.645  real	0m6.673s
00:10:50.645  user	0m11.178s
00:10:50.645  sys	0m1.462s
00:10:50.645   11:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:50.645   11:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:50.645  ************************************
00:10:50.645  END TEST raid_superblock_test
00:10:50.645  ************************************
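The raid_superblock_test passage above exercises re-assembly from the on-disk raid superblock: the raid bdev and its passthru members are torn down, members are re-created with their original UUIDs, and the array is expected to move from "configuring" back to "online" once enough base bdevs reappear. A condensed sketch of that RPC sequence, assuming rpc_cmd wraps the SPDK RPC client and reusing the names and UUIDs from the log:

    # Tear down the assembled array and its passthru members.
    rpc_cmd bdev_raid_delete raid_bdev1
    rpc_cmd bdev_passthru_delete pt2
    rpc_cmd bdev_passthru_delete pt3
    # Re-create members with the same UUIDs; the superblock written on the
    # underlying malloc bdevs lets raid_bdev1 re-assemble automatically.
    rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002  # -> "configuring"
    rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003  # -> "online"
    rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'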
00:10:50.645   11:32:16 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read
00:10:50.645   11:32:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:10:50.645   11:32:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:50.645   11:32:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:50.645  ************************************
00:10:50.645  START TEST raid_read_error_test
00:10:50.645  ************************************
00:10:50.645   11:32:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read
00:10:50.645   11:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1
00:10:50.645   11:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3
00:10:50.645   11:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:10:50.645    11:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:10:50.645    11:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:50.645    11:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:10:50.645    11:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:10:50.645    11:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:50.645    11:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:10:50.645    11:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:10:50.645    11:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:50.645    11:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:10:50.645    11:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:10:50.645    11:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:50.645   11:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:10:50.645   11:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:10:50.645   11:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:10:50.645   11:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:10:50.645   11:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:10:50.645   11:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:10:50.645   11:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:10:50.645   11:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']'
00:10:50.645   11:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0
00:10:50.645    11:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:10:50.645   11:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xS6b7HcEHd
00:10:50.645   11:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80408
00:10:50.645   11:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:10:50.645   11:32:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80408
00:10:50.645  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:50.645   11:32:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 80408 ']'
00:10:50.645   11:32:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:50.645   11:32:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:50.645   11:32:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:50.645   11:32:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:10:50.645   11:32:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:50.645  [2024-12-16 11:32:16.702329] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:10:50.645  [2024-12-16 11:32:16.702479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80408 ]
00:10:50.937  [2024-12-16 11:32:16.857211] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:50.937  [2024-12-16 11:32:16.905091] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:10:50.937  [2024-12-16 11:32:16.948857] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:50.937  [2024-12-16 11:32:16.948980] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:51.873  BaseBdev1_malloc
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:51.873  true
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:51.873  [2024-12-16 11:32:17.627728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:10:51.873  [2024-12-16 11:32:17.627878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:51.873  [2024-12-16 11:32:17.627905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:10:51.873  [2024-12-16 11:32:17.627915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:51.873  [2024-12-16 11:32:17.630323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:51.873  [2024-12-16 11:32:17.630373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:10:51.873  BaseBdev1
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:51.873  BaseBdev2_malloc
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:51.873  true
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:51.873  [2024-12-16 11:32:17.680196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:10:51.873  [2024-12-16 11:32:17.680267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:51.873  [2024-12-16 11:32:17.680291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:10:51.873  [2024-12-16 11:32:17.680301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:51.873  [2024-12-16 11:32:17.682681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:51.873  [2024-12-16 11:32:17.682799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:10:51.873  BaseBdev2
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:51.873  BaseBdev3_malloc
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:51.873  true
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:51.873  [2024-12-16 11:32:17.721079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:10:51.873  [2024-12-16 11:32:17.721142] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:51.873  [2024-12-16 11:32:17.721164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:10:51.873  [2024-12-16 11:32:17.721173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:51.873  [2024-12-16 11:32:17.723345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:51.873  [2024-12-16 11:32:17.723390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:10:51.873  BaseBdev3
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:51.873  [2024-12-16 11:32:17.733114] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:51.873  [2024-12-16 11:32:17.735030] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:51.873  [2024-12-16 11:32:17.735117] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:51.873  [2024-12-16 11:32:17.735308] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:10:51.873  [2024-12-16 11:32:17.735325] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:10:51.873  [2024-12-16 11:32:17.735586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:10:51.873  [2024-12-16 11:32:17.735763] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:10:51.873  [2024-12-16 11:32:17.735774] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00
00:10:51.873  [2024-12-16 11:32:17.735931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:51.873    11:32:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:51.873    11:32:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:51.873    11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:51.873    11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:51.873    11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:51.873   11:32:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:51.873    "name": "raid_bdev1",
00:10:51.873    "uuid": "4326ee85-8d51-4545-826b-f418febaacbb",
00:10:51.873    "strip_size_kb": 0,
00:10:51.873    "state": "online",
00:10:51.873    "raid_level": "raid1",
00:10:51.873    "superblock": true,
00:10:51.873    "num_base_bdevs": 3,
00:10:51.873    "num_base_bdevs_discovered": 3,
00:10:51.873    "num_base_bdevs_operational": 3,
00:10:51.873    "base_bdevs_list": [
00:10:51.873      {
00:10:51.873        "name": "BaseBdev1",
00:10:51.873        "uuid": "279490e6-7b8d-5192-a949-8a305778e8c7",
00:10:51.873        "is_configured": true,
00:10:51.873        "data_offset": 2048,
00:10:51.873        "data_size": 63488
00:10:51.873      },
00:10:51.873      {
00:10:51.873        "name": "BaseBdev2",
00:10:51.874        "uuid": "0b7c139d-87eb-5fc9-a2e7-56cb8084e988",
00:10:51.874        "is_configured": true,
00:10:51.874        "data_offset": 2048,
00:10:51.874        "data_size": 63488
00:10:51.874      },
00:10:51.874      {
00:10:51.874        "name": "BaseBdev3",
00:10:51.874        "uuid": "db745f39-154b-50ca-ac99-55311c60fc39",
00:10:51.874        "is_configured": true,
00:10:51.874        "data_offset": 2048,
00:10:51.874        "data_size": 63488
00:10:51.874      }
00:10:51.874    ]
00:10:51.874  }'
00:10:51.874   11:32:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:51.874   11:32:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:52.441   11:32:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:10:52.441   11:32:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:10:52.441  [2024-12-16 11:32:18.312532] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:10:53.375   11:32:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:10:53.375   11:32:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:53.375   11:32:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:53.375   11:32:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:53.375   11:32:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:10:53.375   11:32:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:10:53.375   11:32:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]]
00:10:53.375   11:32:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3
00:10:53.375   11:32:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:10:53.375   11:32:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:53.375   11:32:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:53.375   11:32:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:53.375   11:32:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:53.375   11:32:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:53.375   11:32:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:53.375   11:32:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:53.375   11:32:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:53.375   11:32:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:53.375    11:32:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:53.375    11:32:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:53.375    11:32:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:53.375    11:32:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:53.375    11:32:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:53.375   11:32:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:53.375    "name": "raid_bdev1",
00:10:53.375    "uuid": "4326ee85-8d51-4545-826b-f418febaacbb",
00:10:53.375    "strip_size_kb": 0,
00:10:53.375    "state": "online",
00:10:53.375    "raid_level": "raid1",
00:10:53.375    "superblock": true,
00:10:53.375    "num_base_bdevs": 3,
00:10:53.375    "num_base_bdevs_discovered": 3,
00:10:53.375    "num_base_bdevs_operational": 3,
00:10:53.375    "base_bdevs_list": [
00:10:53.375      {
00:10:53.375        "name": "BaseBdev1",
00:10:53.376        "uuid": "279490e6-7b8d-5192-a949-8a305778e8c7",
00:10:53.376        "is_configured": true,
00:10:53.376        "data_offset": 2048,
00:10:53.376        "data_size": 63488
00:10:53.376      },
00:10:53.376      {
00:10:53.376        "name": "BaseBdev2",
00:10:53.376        "uuid": "0b7c139d-87eb-5fc9-a2e7-56cb8084e988",
00:10:53.376        "is_configured": true,
00:10:53.376        "data_offset": 2048,
00:10:53.376        "data_size": 63488
00:10:53.376      },
00:10:53.376      {
00:10:53.376        "name": "BaseBdev3",
00:10:53.376        "uuid": "db745f39-154b-50ca-ac99-55311c60fc39",
00:10:53.376        "is_configured": true,
00:10:53.376        "data_offset": 2048,
00:10:53.376        "data_size": 63488
00:10:53.376      }
00:10:53.376    ]
00:10:53.376  }'
00:10:53.376   11:32:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:53.376   11:32:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:53.634   11:32:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:10:53.634   11:32:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:53.634   11:32:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:53.634  [2024-12-16 11:32:19.687296] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:53.634  [2024-12-16 11:32:19.687422] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:53.634  [2024-12-16 11:32:19.690025] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:53.634  [2024-12-16 11:32:19.690105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:53.634  [2024-12-16 11:32:19.690220] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:53.634  [2024-12-16 11:32:19.690294] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline
00:10:53.634   11:32:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:53.634  {
00:10:53.634    "results": [
00:10:53.634      {
00:10:53.634        "job": "raid_bdev1",
00:10:53.634        "core_mask": "0x1",
00:10:53.634        "workload": "randrw",
00:10:53.634        "percentage": 50,
00:10:53.634        "status": "finished",
00:10:53.634        "queue_depth": 1,
00:10:53.634        "io_size": 131072,
00:10:53.634        "runtime": 1.375517,
00:10:53.634        "iops": 13769.36817211274,
00:10:53.634        "mibps": 1721.1710215140924,
00:10:53.634        "io_failed": 0,
00:10:53.634        "io_timeout": 0,
00:10:53.634        "avg_latency_us": 69.92327063630034,
00:10:53.634        "min_latency_us": 22.69344978165939,
00:10:53.634        "max_latency_us": 1509.6174672489083
00:10:53.634      }
00:10:53.634    ],
00:10:53.634    "core_count": 1
00:10:53.634  }
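As a consistency check on the bdevperf summary above (an editorial note, not harness output): with io_size 131072 bytes, mibps = iops * io_size / 2^20 = 13769.368 * 131072 / 1048576 ≈ 1721.17 MiB/s, i.e. iops / 8 for 128 KiB I/O, which matches the reported value. io_failed staying at 0 despite the injected read error is consistent with the raid1 read path servicing those reads from the remaining base bdevs.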
00:10:53.634   11:32:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80408
00:10:53.634   11:32:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 80408 ']'
00:10:53.634   11:32:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 80408
00:10:53.634    11:32:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname
00:10:53.892   11:32:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:53.892    11:32:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80408
00:10:53.892  killing process with pid 80408
00:10:53.892   11:32:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:53.892   11:32:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:10:53.892   11:32:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80408'
00:10:53.892   11:32:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 80408
00:10:53.892  [2024-12-16 11:32:19.742060] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:53.893   11:32:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 80408
00:10:53.893  [2024-12-16 11:32:19.767604] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:54.151    11:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:10:54.151    11:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xS6b7HcEHd
00:10:54.151    11:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:10:54.151   11:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00
00:10:54.151   11:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1
00:10:54.151   11:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:10:54.151   11:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0
00:10:54.151   11:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]]
00:10:54.151  
00:10:54.151  real	0m3.416s
00:10:54.151  user	0m4.387s
00:10:54.151  sys	0m0.563s
00:10:54.151  ************************************
00:10:54.151  END TEST raid_read_error_test
00:10:54.151  ************************************
00:10:54.151   11:32:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:54.151   11:32:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:54.151   11:32:20 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write
00:10:54.151   11:32:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:10:54.151   11:32:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:54.151   11:32:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:54.151  ************************************
00:10:54.151  START TEST raid_write_error_test
00:10:54.151  ************************************
00:10:54.151   11:32:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write
00:10:54.151   11:32:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1
00:10:54.151   11:32:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3
00:10:54.151   11:32:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:10:54.151    11:32:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:10:54.151    11:32:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:54.151    11:32:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:10:54.151    11:32:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:10:54.151    11:32:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:54.151    11:32:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:10:54.151    11:32:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:10:54.151    11:32:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:54.151    11:32:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:10:54.151    11:32:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:10:54.151    11:32:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:54.151   11:32:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:10:54.151   11:32:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:10:54.151   11:32:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:10:54.151   11:32:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:10:54.151   11:32:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:10:54.151   11:32:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:10:54.151   11:32:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:10:54.151   11:32:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']'
00:10:54.151   11:32:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0
00:10:54.151    11:32:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:10:54.151   11:32:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7RXFAPfRkk
00:10:54.151   11:32:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:10:54.151   11:32:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80537
00:10:54.151   11:32:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80537
00:10:54.151   11:32:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 80537 ']'
00:10:54.151   11:32:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:54.151   11:32:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:54.151   11:32:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:54.151  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:54.151   11:32:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:10:54.151   11:32:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:54.151  [2024-12-16 11:32:20.195390] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:10:54.151  [2024-12-16 11:32:20.195608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80537 ]
00:10:54.410  [2024-12-16 11:32:20.351251] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:54.411  [2024-12-16 11:32:20.397656] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:10:54.411  [2024-12-16 11:32:20.439594] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:54.411  [2024-12-16 11:32:20.439628] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:55.367   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:10:55.367   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0
00:10:55.367   11:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:10:55.367   11:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:10:55.367   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:55.367   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:55.367  BaseBdev1_malloc
00:10:55.367   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:55.367   11:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:10:55.367   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:55.367   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:55.367  true
00:10:55.367   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:55.367   11:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:10:55.367   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:55.367   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:55.367  [2024-12-16 11:32:21.085816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:10:55.367  [2024-12-16 11:32:21.085898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:55.367  [2024-12-16 11:32:21.085917] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:10:55.367  [2024-12-16 11:32:21.085932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:55.367  [2024-12-16 11:32:21.087991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:55.367  [2024-12-16 11:32:21.088029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:10:55.367  BaseBdev1
00:10:55.367   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:55.367   11:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:10:55.367   11:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:10:55.367   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:55.367   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:55.367  BaseBdev2_malloc
00:10:55.367   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:55.367   11:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:10:55.367   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:55.367   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:55.367  true
00:10:55.367   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:55.367   11:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:10:55.367   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:55.367   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:55.367  [2024-12-16 11:32:21.136222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:10:55.367  [2024-12-16 11:32:21.136360] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:55.367  [2024-12-16 11:32:21.136382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:10:55.367  [2024-12-16 11:32:21.136392] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:55.367  [2024-12-16 11:32:21.138457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:55.368  [2024-12-16 11:32:21.138506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:10:55.368  BaseBdev2
00:10:55.368   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:55.368   11:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:10:55.368   11:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:10:55.368   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:55.368   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:55.368  BaseBdev3_malloc
00:10:55.368   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:55.368   11:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:10:55.368   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:55.368   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:55.368  true
00:10:55.368   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:55.368   11:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:10:55.368   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:55.368   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:55.368  [2024-12-16 11:32:21.176751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:10:55.368  [2024-12-16 11:32:21.176805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:55.368  [2024-12-16 11:32:21.176839] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:10:55.368  [2024-12-16 11:32:21.176847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:55.368  [2024-12-16 11:32:21.178908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:55.368  [2024-12-16 11:32:21.178999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:10:55.368  BaseBdev3
00:10:55.368   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:55.368   11:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s
00:10:55.368   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:55.368   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:55.368  [2024-12-16 11:32:21.188794] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:55.368  [2024-12-16 11:32:21.190646] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:55.368  [2024-12-16 11:32:21.190726] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:55.368  [2024-12-16 11:32:21.190899] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:10:55.368  [2024-12-16 11:32:21.190916] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:10:55.368  [2024-12-16 11:32:21.191147] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:10:55.368  [2024-12-16 11:32:21.191322] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:10:55.368  [2024-12-16 11:32:21.191333] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00
00:10:55.368  [2024-12-16 11:32:21.191460] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:55.368   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:55.368   11:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:10:55.368   11:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:55.368   11:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:55.368   11:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:55.368   11:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:55.368   11:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:55.368   11:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:55.368   11:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:55.368   11:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:55.368   11:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:55.368    11:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:55.368    11:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:55.368    11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:55.368    11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:55.368    11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:55.368   11:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:55.368    "name": "raid_bdev1",
00:10:55.368    "uuid": "bfb82721-1a06-41f1-a435-171689826f65",
00:10:55.368    "strip_size_kb": 0,
00:10:55.368    "state": "online",
00:10:55.368    "raid_level": "raid1",
00:10:55.368    "superblock": true,
00:10:55.368    "num_base_bdevs": 3,
00:10:55.368    "num_base_bdevs_discovered": 3,
00:10:55.368    "num_base_bdevs_operational": 3,
00:10:55.368    "base_bdevs_list": [
00:10:55.368      {
00:10:55.368        "name": "BaseBdev1",
00:10:55.368        "uuid": "69a769a3-6c67-54d3-ba6a-9b00b1615c73",
00:10:55.368        "is_configured": true,
00:10:55.368        "data_offset": 2048,
00:10:55.368        "data_size": 63488
00:10:55.368      },
00:10:55.368      {
00:10:55.368        "name": "BaseBdev2",
00:10:55.368        "uuid": "8f01c3ee-d198-55e6-aab8-dbe65d81f7d9",
00:10:55.368        "is_configured": true,
00:10:55.368        "data_offset": 2048,
00:10:55.368        "data_size": 63488
00:10:55.368      },
00:10:55.368      {
00:10:55.368        "name": "BaseBdev3",
00:10:55.368        "uuid": "c97630ec-28ee-5dee-9f36-c878f2e9a9f7",
00:10:55.368        "is_configured": true,
00:10:55.368        "data_offset": 2048,
00:10:55.368        "data_size": 63488
00:10:55.368      }
00:10:55.368    ]
00:10:55.368  }'
00:10:55.368   11:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:55.368   11:32:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:55.667   11:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:10:55.667   11:32:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:10:55.927  [2024-12-16 11:32:21.748255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:10:56.863   11:32:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:10:56.863   11:32:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:56.863   11:32:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:56.863  [2024-12-16 11:32:22.672426] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1'
00:10:56.863  [2024-12-16 11:32:22.672625] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:10:56.863  [2024-12-16 11:32:22.672925] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10
00:10:56.863   11:32:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:56.863   11:32:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:10:56.863   11:32:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:10:56.863   11:32:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]]
00:10:56.863   11:32:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2
00:10:56.863   11:32:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:10:56.863   11:32:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:56.863   11:32:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:56.863   11:32:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:56.863   11:32:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:56.863   11:32:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:56.863   11:32:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:56.863   11:32:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:56.863   11:32:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:56.863   11:32:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:56.863    11:32:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:56.863    11:32:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:56.863    11:32:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:56.863    11:32:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:56.863    11:32:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:56.863   11:32:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:56.863    "name": "raid_bdev1",
00:10:56.863    "uuid": "bfb82721-1a06-41f1-a435-171689826f65",
00:10:56.863    "strip_size_kb": 0,
00:10:56.863    "state": "online",
00:10:56.863    "raid_level": "raid1",
00:10:56.863    "superblock": true,
00:10:56.863    "num_base_bdevs": 3,
00:10:56.863    "num_base_bdevs_discovered": 2,
00:10:56.863    "num_base_bdevs_operational": 2,
00:10:56.863    "base_bdevs_list": [
00:10:56.863      {
00:10:56.863        "name": null,
00:10:56.863        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:56.863        "is_configured": false,
00:10:56.863        "data_offset": 0,
00:10:56.863        "data_size": 63488
00:10:56.863      },
00:10:56.863      {
00:10:56.863        "name": "BaseBdev2",
00:10:56.863        "uuid": "8f01c3ee-d198-55e6-aab8-dbe65d81f7d9",
00:10:56.863        "is_configured": true,
00:10:56.863        "data_offset": 2048,
00:10:56.863        "data_size": 63488
00:10:56.863      },
00:10:56.863      {
00:10:56.863        "name": "BaseBdev3",
00:10:56.863        "uuid": "c97630ec-28ee-5dee-9f36-c878f2e9a9f7",
00:10:56.863        "is_configured": true,
00:10:56.863        "data_offset": 2048,
00:10:56.863        "data_size": 63488
00:10:56.863      }
00:10:56.863    ]
00:10:56.863  }'
00:10:56.863   11:32:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:56.863   11:32:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
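Note: verify_raid_bdev_state validates the degraded array by reading fields out of the $raid_bdev_info blob captured just above (the variable name is taken from the trace). A minimal sketch of that kind of check, using the same jq style already shown in the trace:

  jq -r '.state, .num_base_bdevs_discovered, .num_base_bdevs_operational' <<< "$raid_bdev_info"
  # expected after failing one of three raid1 base bdevs: online, 2, 2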
00:10:57.121   11:32:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:10:57.121   11:32:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:57.121   11:32:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:57.121  [2024-12-16 11:32:23.170713] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:57.121  [2024-12-16 11:32:23.170859] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:57.121  [2024-12-16 11:32:23.173703] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:57.121  [2024-12-16 11:32:23.173748] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:57.121  [2024-12-16 11:32:23.173829] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:57.121  [2024-12-16 11:32:23.173840] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline
00:10:57.121  {
00:10:57.121    "results": [
00:10:57.121      {
00:10:57.121        "job": "raid_bdev1",
00:10:57.121        "core_mask": "0x1",
00:10:57.121        "workload": "randrw",
00:10:57.122        "percentage": 50,
00:10:57.122        "status": "finished",
00:10:57.122        "queue_depth": 1,
00:10:57.122        "io_size": 131072,
00:10:57.122        "runtime": 1.423387,
00:10:57.122        "iops": 15231.97837271241,
00:10:57.122        "mibps": 1903.9972965890513,
00:10:57.122        "io_failed": 0,
00:10:57.122        "io_timeout": 0,
00:10:57.122        "avg_latency_us": 62.92364944735584,
00:10:57.122        "min_latency_us": 23.699563318777294,
00:10:57.122        "max_latency_us": 1495.3082969432314
00:10:57.122      }
00:10:57.122    ],
00:10:57.122    "core_count": 1
00:10:57.122  }
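Note: as in the read-test summary earlier, the fields above come straight from the bdevperf invocation at the start of this test (the command with -w randrw -M 50 -o 128k -q 1 -T raid_bdev1). A short mapping, as a reading aid only:

  # -w randrw -M 50  -> "workload": "randrw", "percentage": 50
  # -o 128k          -> "io_size": 131072
  # -q 1             -> "queue_depth": 1
  # mibps is again iops * 131072 / 2^20, i.e. iops / 8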
00:10:57.122   11:32:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:57.122   11:32:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80537
00:10:57.122   11:32:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 80537 ']'
00:10:57.122   11:32:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 80537
00:10:57.122    11:32:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname
00:10:57.122   11:32:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:57.122    11:32:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80537
00:10:57.380  killing process with pid 80537
00:10:57.380   11:32:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:57.380   11:32:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:10:57.380   11:32:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80537'
00:10:57.380   11:32:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 80537
00:10:57.380  [2024-12-16 11:32:23.221282] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:57.380   11:32:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 80537
00:10:57.380  [2024-12-16 11:32:23.247788] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:57.639    11:32:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7RXFAPfRkk
00:10:57.639    11:32:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:10:57.639    11:32:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:10:57.639  ************************************
00:10:57.639  END TEST raid_write_error_test
00:10:57.639  ************************************
00:10:57.639   11:32:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00
00:10:57.639   11:32:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1
00:10:57.639   11:32:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:10:57.639   11:32:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0
00:10:57.639   11:32:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]]
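Note: the test derives fail_per_s by grepping bdevperf's table output in /raidtest/tmp.7RXFAPfRkk, as the trace above shows. The same figure can be read off the JSON summary printed earlier; a minimal sketch, assuming that summary were saved to results.json (hypothetical path, not produced by this run):

  jq -r '.results[] | select(.job == "raid_bdev1") | .io_failed / .runtime' results.json
  # 0 failed I/Os over the ~1.4 s run -> 0 failures per second, consistent with fail_per_s=0.00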
00:10:57.639  
00:10:57.639  real	0m3.414s
00:10:57.639  user	0m4.346s
00:10:57.639  sys	0m0.574s
00:10:57.639   11:32:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:57.639   11:32:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:57.639   11:32:23 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4}
00:10:57.639   11:32:23 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:10:57.639   11:32:23 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false
00:10:57.639   11:32:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:10:57.639   11:32:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:57.639   11:32:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:57.639  ************************************
00:10:57.639  START TEST raid_state_function_test
00:10:57.639  ************************************
00:10:57.639   11:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false
00:10:57.639   11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:10:57.639   11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:10:57.639   11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:10:57.639   11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:10:57.639    11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:10:57.639    11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:57.639    11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:10:57.639    11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:57.639    11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:57.639    11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:10:57.639    11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:57.639    11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:57.639    11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:10:57.639    11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:57.639    11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:57.639    11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:10:57.639    11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:57.639    11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:57.639   11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:10:57.639   11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:10:57.639   11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:10:57.639   11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:10:57.639   11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:10:57.639   11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:10:57.639   11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:10:57.639   11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:10:57.639   11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:10:57.639   11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:10:57.639   11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:10:57.639   11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80670
00:10:57.639   11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:10:57.639   11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80670'
00:10:57.639  Process raid pid: 80670
00:10:57.639   11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80670
00:10:57.639   11:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 80670 ']'
00:10:57.639   11:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:57.639   11:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:57.639   11:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:57.639  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:57.639   11:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:10:57.639   11:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:57.639  [2024-12-16 11:32:23.671587] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:10:57.639  [2024-12-16 11:32:23.671831] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:57.898  [2024-12-16 11:32:23.833208] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:57.898  [2024-12-16 11:32:23.882732] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:10:57.898  [2024-12-16 11:32:23.926364] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:57.898  [2024-12-16 11:32:23.926502] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:58.836   11:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:10:58.836   11:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:10:58.836   11:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:10:58.836   11:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:58.836   11:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:58.836  [2024-12-16 11:32:24.552052] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:58.836  [2024-12-16 11:32:24.552118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:58.836  [2024-12-16 11:32:24.552139] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:58.836  [2024-12-16 11:32:24.552150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:58.836  [2024-12-16 11:32:24.552156] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:58.836  [2024-12-16 11:32:24.552169] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:58.836  [2024-12-16 11:32:24.552176] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:10:58.836  [2024-12-16 11:32:24.552184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:10:58.836   11:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:58.836   11:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:58.836   11:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:58.836   11:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:58.836   11:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:58.836   11:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:58.836   11:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:58.836   11:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:58.836   11:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:58.836   11:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:58.836   11:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:58.836    11:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:58.836    11:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:58.836    11:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:58.836    11:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:58.836    11:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:58.836   11:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:58.836    "name": "Existed_Raid",
00:10:58.836    "uuid": "00000000-0000-0000-0000-000000000000",
00:10:58.836    "strip_size_kb": 64,
00:10:58.836    "state": "configuring",
00:10:58.836    "raid_level": "raid0",
00:10:58.836    "superblock": false,
00:10:58.836    "num_base_bdevs": 4,
00:10:58.836    "num_base_bdevs_discovered": 0,
00:10:58.836    "num_base_bdevs_operational": 4,
00:10:58.836    "base_bdevs_list": [
00:10:58.836      {
00:10:58.836        "name": "BaseBdev1",
00:10:58.836        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:58.836        "is_configured": false,
00:10:58.836        "data_offset": 0,
00:10:58.836        "data_size": 0
00:10:58.836      },
00:10:58.836      {
00:10:58.836        "name": "BaseBdev2",
00:10:58.836        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:58.836        "is_configured": false,
00:10:58.836        "data_offset": 0,
00:10:58.836        "data_size": 0
00:10:58.836      },
00:10:58.836      {
00:10:58.836        "name": "BaseBdev3",
00:10:58.836        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:58.836        "is_configured": false,
00:10:58.836        "data_offset": 0,
00:10:58.836        "data_size": 0
00:10:58.836      },
00:10:58.836      {
00:10:58.836        "name": "BaseBdev4",
00:10:58.836        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:58.836        "is_configured": false,
00:10:58.836        "data_offset": 0,
00:10:58.836        "data_size": 0
00:10:58.836      }
00:10:58.836    ]
00:10:58.836  }'
00:10:58.836   11:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:58.836   11:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:59.095   11:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:59.095   11:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:59.095   11:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:59.095  [2024-12-16 11:32:24.995258] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:59.095  [2024-12-16 11:32:24.995380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:10:59.095   11:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:59.095   11:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:10:59.095   11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:59.095   11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:59.095  [2024-12-16 11:32:25.007281] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:59.095  [2024-12-16 11:32:25.007373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:59.095  [2024-12-16 11:32:25.007404] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:59.095  [2024-12-16 11:32:25.007428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:59.095  [2024-12-16 11:32:25.007455] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:59.095  [2024-12-16 11:32:25.007528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:59.095  [2024-12-16 11:32:25.007582] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:10:59.095  [2024-12-16 11:32:25.007613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:10:59.095   11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:59.095   11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:59.095   11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:59.095   11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:59.095  [2024-12-16 11:32:25.027946] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:59.095  BaseBdev1
00:10:59.095   11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:59.095   11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:10:59.095   11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:10:59.095   11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:59.095   11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:10:59.095   11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:59.095   11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:59.095   11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:59.095   11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:59.095   11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:59.095   11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:59.095   11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:59.095   11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:59.095   11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:59.095  [
00:10:59.095  {
00:10:59.095  "name": "BaseBdev1",
00:10:59.095  "aliases": [
00:10:59.095  "c7f78a8e-8e80-4d38-ba0a-112d2a610153"
00:10:59.095  ],
00:10:59.095  "product_name": "Malloc disk",
00:10:59.095  "block_size": 512,
00:10:59.095  "num_blocks": 65536,
00:10:59.095  "uuid": "c7f78a8e-8e80-4d38-ba0a-112d2a610153",
00:10:59.095  "assigned_rate_limits": {
00:10:59.095  "rw_ios_per_sec": 0,
00:10:59.095  "rw_mbytes_per_sec": 0,
00:10:59.095  "r_mbytes_per_sec": 0,
00:10:59.095  "w_mbytes_per_sec": 0
00:10:59.095  },
00:10:59.095  "claimed": true,
00:10:59.095  "claim_type": "exclusive_write",
00:10:59.095  "zoned": false,
00:10:59.095  "supported_io_types": {
00:10:59.095  "read": true,
00:10:59.095  "write": true,
00:10:59.095  "unmap": true,
00:10:59.095  "flush": true,
00:10:59.095  "reset": true,
00:10:59.095  "nvme_admin": false,
00:10:59.095  "nvme_io": false,
00:10:59.095  "nvme_io_md": false,
00:10:59.095  "write_zeroes": true,
00:10:59.095  "zcopy": true,
00:10:59.095  "get_zone_info": false,
00:10:59.095  "zone_management": false,
00:10:59.095  "zone_append": false,
00:10:59.095  "compare": false,
00:10:59.095  "compare_and_write": false,
00:10:59.095  "abort": true,
00:10:59.095  "seek_hole": false,
00:10:59.095  "seek_data": false,
00:10:59.095  "copy": true,
00:10:59.095  "nvme_iov_md": false
00:10:59.095  },
00:10:59.095  "memory_domains": [
00:10:59.095  {
00:10:59.095  "dma_device_id": "system",
00:10:59.095  "dma_device_type": 1
00:10:59.095  },
00:10:59.095  {
00:10:59.095  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:59.095  "dma_device_type": 2
00:10:59.095  }
00:10:59.095  ],
00:10:59.095  "driver_specific": {}
00:10:59.095  }
00:10:59.095  ]
00:10:59.095   11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:59.095   11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
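Note: the BaseBdev1 properties above line up with the bdev_malloc_create 32 512 call issued earlier (roughly equivalent to running scripts/rpc.py bdev_malloc_create 32 512 -b BaseBdev1 against the same RPC socket): 32 MiB of capacity at a 512-byte block size gives exactly the reported block count.

  awk 'BEGIN { print 32 * 1024 * 1024 / 512 }'
  # -> 65536, matching "num_blocks": 65536 with "block_size": 512 in the bdev_get_bdevs output above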
00:10:59.095   11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:59.095   11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:59.095   11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:59.095   11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:59.095   11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:59.095   11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:59.095   11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:59.095   11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:59.095   11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:59.095   11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:59.096    11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:59.096    11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:59.096    11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:59.096    11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:59.096    11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:59.096   11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:59.096    "name": "Existed_Raid",
00:10:59.096    "uuid": "00000000-0000-0000-0000-000000000000",
00:10:59.096    "strip_size_kb": 64,
00:10:59.096    "state": "configuring",
00:10:59.096    "raid_level": "raid0",
00:10:59.096    "superblock": false,
00:10:59.096    "num_base_bdevs": 4,
00:10:59.096    "num_base_bdevs_discovered": 1,
00:10:59.096    "num_base_bdevs_operational": 4,
00:10:59.096    "base_bdevs_list": [
00:10:59.096      {
00:10:59.096        "name": "BaseBdev1",
00:10:59.096        "uuid": "c7f78a8e-8e80-4d38-ba0a-112d2a610153",
00:10:59.096        "is_configured": true,
00:10:59.096        "data_offset": 0,
00:10:59.096        "data_size": 65536
00:10:59.096      },
00:10:59.096      {
00:10:59.096        "name": "BaseBdev2",
00:10:59.096        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:59.096        "is_configured": false,
00:10:59.096        "data_offset": 0,
00:10:59.096        "data_size": 0
00:10:59.096      },
00:10:59.096      {
00:10:59.096        "name": "BaseBdev3",
00:10:59.096        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:59.096        "is_configured": false,
00:10:59.096        "data_offset": 0,
00:10:59.096        "data_size": 0
00:10:59.096      },
00:10:59.096      {
00:10:59.096        "name": "BaseBdev4",
00:10:59.096        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:59.096        "is_configured": false,
00:10:59.096        "data_offset": 0,
00:10:59.096        "data_size": 0
00:10:59.096      }
00:10:59.096    ]
00:10:59.096  }'
00:10:59.096   11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:59.096   11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:59.662   11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:59.662   11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:59.662   11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:59.662  [2024-12-16 11:32:25.543137] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:59.662  [2024-12-16 11:32:25.543294] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:10:59.662   11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:59.662   11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:10:59.662   11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:59.662   11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:59.662  [2024-12-16 11:32:25.555138] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:59.662  [2024-12-16 11:32:25.557069] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:59.662  [2024-12-16 11:32:25.557116] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:59.662  [2024-12-16 11:32:25.557126] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:59.662  [2024-12-16 11:32:25.557134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:59.662  [2024-12-16 11:32:25.557140] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:10:59.662  [2024-12-16 11:32:25.557148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:10:59.662   11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:59.662   11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:10:59.662   11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:59.662   11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:59.662   11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:59.662   11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:59.662   11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:59.662   11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:59.662   11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:59.662   11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:59.662   11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:59.662   11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:59.662   11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:59.662    11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:59.662    11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:59.662    11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:59.662    11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:59.662    11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:59.662   11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:59.662    "name": "Existed_Raid",
00:10:59.662    "uuid": "00000000-0000-0000-0000-000000000000",
00:10:59.662    "strip_size_kb": 64,
00:10:59.662    "state": "configuring",
00:10:59.662    "raid_level": "raid0",
00:10:59.662    "superblock": false,
00:10:59.662    "num_base_bdevs": 4,
00:10:59.662    "num_base_bdevs_discovered": 1,
00:10:59.662    "num_base_bdevs_operational": 4,
00:10:59.662    "base_bdevs_list": [
00:10:59.662      {
00:10:59.662        "name": "BaseBdev1",
00:10:59.662        "uuid": "c7f78a8e-8e80-4d38-ba0a-112d2a610153",
00:10:59.662        "is_configured": true,
00:10:59.662        "data_offset": 0,
00:10:59.662        "data_size": 65536
00:10:59.662      },
00:10:59.662      {
00:10:59.662        "name": "BaseBdev2",
00:10:59.662        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:59.662        "is_configured": false,
00:10:59.662        "data_offset": 0,
00:10:59.662        "data_size": 0
00:10:59.662      },
00:10:59.662      {
00:10:59.662        "name": "BaseBdev3",
00:10:59.662        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:59.662        "is_configured": false,
00:10:59.662        "data_offset": 0,
00:10:59.662        "data_size": 0
00:10:59.662      },
00:10:59.662      {
00:10:59.662        "name": "BaseBdev4",
00:10:59.662        "uuid": "00000000-0000-0000-0000-000000000000",
00:10:59.662        "is_configured": false,
00:10:59.662        "data_offset": 0,
00:10:59.662        "data_size": 0
00:10:59.662      }
00:10:59.662    ]
00:10:59.662  }'
00:10:59.662   11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:59.662   11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:00.230   11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:11:00.230   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:00.230   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:00.230  [2024-12-16 11:32:26.080124] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:00.230  BaseBdev2
00:11:00.230   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:00.230   11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:11:00.230   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:11:00.230   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:00.230   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:11:00.230   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:00.230   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:00.230   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:00.230   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:00.230   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:00.230   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:00.230   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:11:00.230   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:00.230   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:00.230  [
00:11:00.230  {
00:11:00.230  "name": "BaseBdev2",
00:11:00.230  "aliases": [
00:11:00.230  "2406b4ee-daca-49cb-949e-087a31bccfbd"
00:11:00.230  ],
00:11:00.230  "product_name": "Malloc disk",
00:11:00.230  "block_size": 512,
00:11:00.230  "num_blocks": 65536,
00:11:00.230  "uuid": "2406b4ee-daca-49cb-949e-087a31bccfbd",
00:11:00.230  "assigned_rate_limits": {
00:11:00.230  "rw_ios_per_sec": 0,
00:11:00.230  "rw_mbytes_per_sec": 0,
00:11:00.230  "r_mbytes_per_sec": 0,
00:11:00.230  "w_mbytes_per_sec": 0
00:11:00.230  },
00:11:00.230  "claimed": true,
00:11:00.230  "claim_type": "exclusive_write",
00:11:00.230  "zoned": false,
00:11:00.230  "supported_io_types": {
00:11:00.230  "read": true,
00:11:00.230  "write": true,
00:11:00.230  "unmap": true,
00:11:00.230  "flush": true,
00:11:00.230  "reset": true,
00:11:00.230  "nvme_admin": false,
00:11:00.230  "nvme_io": false,
00:11:00.230  "nvme_io_md": false,
00:11:00.230  "write_zeroes": true,
00:11:00.230  "zcopy": true,
00:11:00.230  "get_zone_info": false,
00:11:00.230  "zone_management": false,
00:11:00.230  "zone_append": false,
00:11:00.230  "compare": false,
00:11:00.230  "compare_and_write": false,
00:11:00.230  "abort": true,
00:11:00.230  "seek_hole": false,
00:11:00.230  "seek_data": false,
00:11:00.230  "copy": true,
00:11:00.230  "nvme_iov_md": false
00:11:00.230  },
00:11:00.230  "memory_domains": [
00:11:00.230  {
00:11:00.230  "dma_device_id": "system",
00:11:00.230  "dma_device_type": 1
00:11:00.230  },
00:11:00.230  {
00:11:00.230  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:00.230  "dma_device_type": 2
00:11:00.230  }
00:11:00.230  ],
00:11:00.230  "driver_specific": {}
00:11:00.230  }
00:11:00.230  ]
00:11:00.230   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:00.230   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:11:00.230   11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:00.230   11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:00.230   11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:11:00.230   11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:00.230   11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:00.230   11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:00.230   11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:00.230   11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:00.230   11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:00.230   11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:00.230   11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:00.230   11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:00.230    11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:00.230    11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:00.230    11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:00.230    11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:00.230    11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:00.230   11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:00.230    "name": "Existed_Raid",
00:11:00.230    "uuid": "00000000-0000-0000-0000-000000000000",
00:11:00.230    "strip_size_kb": 64,
00:11:00.230    "state": "configuring",
00:11:00.230    "raid_level": "raid0",
00:11:00.230    "superblock": false,
00:11:00.230    "num_base_bdevs": 4,
00:11:00.230    "num_base_bdevs_discovered": 2,
00:11:00.230    "num_base_bdevs_operational": 4,
00:11:00.230    "base_bdevs_list": [
00:11:00.230      {
00:11:00.230        "name": "BaseBdev1",
00:11:00.230        "uuid": "c7f78a8e-8e80-4d38-ba0a-112d2a610153",
00:11:00.230        "is_configured": true,
00:11:00.230        "data_offset": 0,
00:11:00.230        "data_size": 65536
00:11:00.230      },
00:11:00.230      {
00:11:00.230        "name": "BaseBdev2",
00:11:00.230        "uuid": "2406b4ee-daca-49cb-949e-087a31bccfbd",
00:11:00.230        "is_configured": true,
00:11:00.230        "data_offset": 0,
00:11:00.231        "data_size": 65536
00:11:00.231      },
00:11:00.231      {
00:11:00.231        "name": "BaseBdev3",
00:11:00.231        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:00.231        "is_configured": false,
00:11:00.231        "data_offset": 0,
00:11:00.231        "data_size": 0
00:11:00.231      },
00:11:00.231      {
00:11:00.231        "name": "BaseBdev4",
00:11:00.231        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:00.231        "is_configured": false,
00:11:00.231        "data_offset": 0,
00:11:00.231        "data_size": 0
00:11:00.231      }
00:11:00.231    ]
00:11:00.231  }'
00:11:00.231   11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:00.231   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
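For reference, the step traced above (adding a second base bdev and re-checking the array state) could be reproduced outside the test harness roughly as follows; this is a minimal sketch that assumes SPDK's standard scripts/rpc.py client with its default RPC socket — rpc_cmd in this trace is the autotest wrapper around that client:

  # create a 32 MiB malloc bdev with 512-byte blocks (65536 blocks, matching data_size above)
  ./scripts/rpc.py bdev_malloc_create 32 512 -b BaseBdev2
  # let examine callbacks finish, then confirm the bdev is visible
  ./scripts/rpc.py bdev_wait_for_examine
  ./scripts/rpc.py bdev_get_bdevs -b BaseBdev2 -t 2000
  # the raid stays in "configuring" until all four base bdevs are discovered
  ./scripts/rpc.py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'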
00:11:00.489   11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:11:00.489   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:00.489   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:00.748  [2024-12-16 11:32:26.558653] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:00.748  BaseBdev3
00:11:00.748   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:00.748   11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:11:00.748   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:11:00.748   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:00.748   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:11:00.748   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:00.748   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:00.748   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:00.748   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:00.748   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:00.748   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:00.748   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:11:00.748   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:00.748   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:00.748  [
00:11:00.748  {
00:11:00.748  "name": "BaseBdev3",
00:11:00.748  "aliases": [
00:11:00.748  "2413662a-89b2-459a-85d9-a4633ac89ce2"
00:11:00.748  ],
00:11:00.748  "product_name": "Malloc disk",
00:11:00.748  "block_size": 512,
00:11:00.748  "num_blocks": 65536,
00:11:00.748  "uuid": "2413662a-89b2-459a-85d9-a4633ac89ce2",
00:11:00.748  "assigned_rate_limits": {
00:11:00.748  "rw_ios_per_sec": 0,
00:11:00.748  "rw_mbytes_per_sec": 0,
00:11:00.748  "r_mbytes_per_sec": 0,
00:11:00.748  "w_mbytes_per_sec": 0
00:11:00.748  },
00:11:00.748  "claimed": true,
00:11:00.748  "claim_type": "exclusive_write",
00:11:00.748  "zoned": false,
00:11:00.748  "supported_io_types": {
00:11:00.748  "read": true,
00:11:00.748  "write": true,
00:11:00.748  "unmap": true,
00:11:00.748  "flush": true,
00:11:00.748  "reset": true,
00:11:00.748  "nvme_admin": false,
00:11:00.748  "nvme_io": false,
00:11:00.748  "nvme_io_md": false,
00:11:00.748  "write_zeroes": true,
00:11:00.748  "zcopy": true,
00:11:00.748  "get_zone_info": false,
00:11:00.748  "zone_management": false,
00:11:00.748  "zone_append": false,
00:11:00.748  "compare": false,
00:11:00.748  "compare_and_write": false,
00:11:00.748  "abort": true,
00:11:00.748  "seek_hole": false,
00:11:00.748  "seek_data": false,
00:11:00.748  "copy": true,
00:11:00.748  "nvme_iov_md": false
00:11:00.748  },
00:11:00.748  "memory_domains": [
00:11:00.748  {
00:11:00.748  "dma_device_id": "system",
00:11:00.748  "dma_device_type": 1
00:11:00.748  },
00:11:00.748  {
00:11:00.748  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:00.748  "dma_device_type": 2
00:11:00.748  }
00:11:00.748  ],
00:11:00.748  "driver_specific": {}
00:11:00.748  }
00:11:00.748  ]
00:11:00.748   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:00.748   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:11:00.748   11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:00.748   11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:00.748   11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:11:00.748   11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:00.748   11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:00.748   11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:00.748   11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:00.748   11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:00.748   11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:00.748   11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:00.748   11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:00.748   11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:00.748    11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:00.748    11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:00.748    11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:00.748    11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:00.748    11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:00.748   11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:00.748    "name": "Existed_Raid",
00:11:00.748    "uuid": "00000000-0000-0000-0000-000000000000",
00:11:00.748    "strip_size_kb": 64,
00:11:00.748    "state": "configuring",
00:11:00.748    "raid_level": "raid0",
00:11:00.748    "superblock": false,
00:11:00.748    "num_base_bdevs": 4,
00:11:00.748    "num_base_bdevs_discovered": 3,
00:11:00.748    "num_base_bdevs_operational": 4,
00:11:00.748    "base_bdevs_list": [
00:11:00.749      {
00:11:00.749        "name": "BaseBdev1",
00:11:00.749        "uuid": "c7f78a8e-8e80-4d38-ba0a-112d2a610153",
00:11:00.749        "is_configured": true,
00:11:00.749        "data_offset": 0,
00:11:00.749        "data_size": 65536
00:11:00.749      },
00:11:00.749      {
00:11:00.749        "name": "BaseBdev2",
00:11:00.749        "uuid": "2406b4ee-daca-49cb-949e-087a31bccfbd",
00:11:00.749        "is_configured": true,
00:11:00.749        "data_offset": 0,
00:11:00.749        "data_size": 65536
00:11:00.749      },
00:11:00.749      {
00:11:00.749        "name": "BaseBdev3",
00:11:00.749        "uuid": "2413662a-89b2-459a-85d9-a4633ac89ce2",
00:11:00.749        "is_configured": true,
00:11:00.749        "data_offset": 0,
00:11:00.749        "data_size": 65536
00:11:00.749      },
00:11:00.749      {
00:11:00.749        "name": "BaseBdev4",
00:11:00.749        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:00.749        "is_configured": false,
00:11:00.749        "data_offset": 0,
00:11:00.749        "data_size": 0
00:11:00.749      }
00:11:00.749    ]
00:11:00.749  }'
00:11:00.749   11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:00.749   11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:01.008   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:11:01.008   11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:01.008   11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:01.266  [2024-12-16 11:32:27.077003] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:11:01.266  [2024-12-16 11:32:27.077136] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:11:01.266  [2024-12-16 11:32:27.077151] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512
00:11:01.266  [2024-12-16 11:32:27.077488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:11:01.266  [2024-12-16 11:32:27.077670] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:11:01.266  [2024-12-16 11:32:27.077684] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:11:01.266  [2024-12-16 11:32:27.077891] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:01.266  BaseBdev4
00:11:01.266   11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:01.266   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:11:01.266   11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:11:01.266   11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:01.266   11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:11:01.266   11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:01.266   11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:01.266   11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:01.266   11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:01.266   11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:01.266   11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:01.266   11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:11:01.266   11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:01.266   11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:01.266  [
00:11:01.266  {
00:11:01.266  "name": "BaseBdev4",
00:11:01.266  "aliases": [
00:11:01.266  "36ce6de1-1d7a-4236-8f6e-c6ef753cca46"
00:11:01.266  ],
00:11:01.266  "product_name": "Malloc disk",
00:11:01.267  "block_size": 512,
00:11:01.267  "num_blocks": 65536,
00:11:01.267  "uuid": "36ce6de1-1d7a-4236-8f6e-c6ef753cca46",
00:11:01.267  "assigned_rate_limits": {
00:11:01.267  "rw_ios_per_sec": 0,
00:11:01.267  "rw_mbytes_per_sec": 0,
00:11:01.267  "r_mbytes_per_sec": 0,
00:11:01.267  "w_mbytes_per_sec": 0
00:11:01.267  },
00:11:01.267  "claimed": true,
00:11:01.267  "claim_type": "exclusive_write",
00:11:01.267  "zoned": false,
00:11:01.267  "supported_io_types": {
00:11:01.267  "read": true,
00:11:01.267  "write": true,
00:11:01.267  "unmap": true,
00:11:01.267  "flush": true,
00:11:01.267  "reset": true,
00:11:01.267  "nvme_admin": false,
00:11:01.267  "nvme_io": false,
00:11:01.267  "nvme_io_md": false,
00:11:01.267  "write_zeroes": true,
00:11:01.267  "zcopy": true,
00:11:01.267  "get_zone_info": false,
00:11:01.267  "zone_management": false,
00:11:01.267  "zone_append": false,
00:11:01.267  "compare": false,
00:11:01.267  "compare_and_write": false,
00:11:01.267  "abort": true,
00:11:01.267  "seek_hole": false,
00:11:01.267  "seek_data": false,
00:11:01.267  "copy": true,
00:11:01.267  "nvme_iov_md": false
00:11:01.267  },
00:11:01.267  "memory_domains": [
00:11:01.267  {
00:11:01.267  "dma_device_id": "system",
00:11:01.267  "dma_device_type": 1
00:11:01.267  },
00:11:01.267  {
00:11:01.267  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:01.267  "dma_device_type": 2
00:11:01.267  }
00:11:01.267  ],
00:11:01.267  "driver_specific": {}
00:11:01.267  }
00:11:01.267  ]
00:11:01.267   11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:01.267   11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:11:01.267   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:01.267   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:01.267   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4
00:11:01.267   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:01.267   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:01.267   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:01.267   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:01.267   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:01.267   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:01.267   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:01.267   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:01.267   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:01.267    11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:01.267    11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:01.267    11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:01.267    11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:01.267    11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:01.267   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:01.267    "name": "Existed_Raid",
00:11:01.267    "uuid": "c85c467d-b0eb-4edc-85b3-8772bf08be51",
00:11:01.267    "strip_size_kb": 64,
00:11:01.267    "state": "online",
00:11:01.267    "raid_level": "raid0",
00:11:01.267    "superblock": false,
00:11:01.267    "num_base_bdevs": 4,
00:11:01.267    "num_base_bdevs_discovered": 4,
00:11:01.267    "num_base_bdevs_operational": 4,
00:11:01.267    "base_bdevs_list": [
00:11:01.267      {
00:11:01.267        "name": "BaseBdev1",
00:11:01.267        "uuid": "c7f78a8e-8e80-4d38-ba0a-112d2a610153",
00:11:01.267        "is_configured": true,
00:11:01.267        "data_offset": 0,
00:11:01.267        "data_size": 65536
00:11:01.267      },
00:11:01.267      {
00:11:01.267        "name": "BaseBdev2",
00:11:01.267        "uuid": "2406b4ee-daca-49cb-949e-087a31bccfbd",
00:11:01.267        "is_configured": true,
00:11:01.267        "data_offset": 0,
00:11:01.267        "data_size": 65536
00:11:01.267      },
00:11:01.267      {
00:11:01.267        "name": "BaseBdev3",
00:11:01.267        "uuid": "2413662a-89b2-459a-85d9-a4633ac89ce2",
00:11:01.267        "is_configured": true,
00:11:01.267        "data_offset": 0,
00:11:01.267        "data_size": 65536
00:11:01.267      },
00:11:01.267      {
00:11:01.267        "name": "BaseBdev4",
00:11:01.267        "uuid": "36ce6de1-1d7a-4236-8f6e-c6ef753cca46",
00:11:01.267        "is_configured": true,
00:11:01.267        "data_offset": 0,
00:11:01.267        "data_size": 65536
00:11:01.267      }
00:11:01.267    ]
00:11:01.267  }'
00:11:01.267   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:01.267   11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
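With the fourth base bdev claimed, the array moves from "configuring" to "online" and, for raid0 without a superblock, its capacity is simply the sum of the base data sizes: 4 x 65536 = 262144 blocks of 512 bytes, matching the blockcnt logged by raid_bdev_configure_cont above and the num_blocks reported for the Raid Volume below. A rough standalone equivalent of the property check that follows (same scripts/rpc.py assumption as above) would be:

  # dump the assembled raid volume, then compare block_size/md layout against each base bdev
  ./scripts/rpc.py bdev_get_bdevs -b Existed_Raid | jq '.[]'
  ./scripts/rpc.py bdev_get_bdevs -b BaseBdev1 | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'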
00:11:01.526   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:11:01.526   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:11:01.526   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:01.526   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:01.526   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:11:01.526   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:01.526    11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:11:01.526    11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:01.526    11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:01.526    11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:01.526  [2024-12-16 11:32:27.576653] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:01.526    11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:01.785   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:01.785    "name": "Existed_Raid",
00:11:01.785    "aliases": [
00:11:01.786      "c85c467d-b0eb-4edc-85b3-8772bf08be51"
00:11:01.786    ],
00:11:01.786    "product_name": "Raid Volume",
00:11:01.786    "block_size": 512,
00:11:01.786    "num_blocks": 262144,
00:11:01.786    "uuid": "c85c467d-b0eb-4edc-85b3-8772bf08be51",
00:11:01.786    "assigned_rate_limits": {
00:11:01.786      "rw_ios_per_sec": 0,
00:11:01.786      "rw_mbytes_per_sec": 0,
00:11:01.786      "r_mbytes_per_sec": 0,
00:11:01.786      "w_mbytes_per_sec": 0
00:11:01.786    },
00:11:01.786    "claimed": false,
00:11:01.786    "zoned": false,
00:11:01.786    "supported_io_types": {
00:11:01.786      "read": true,
00:11:01.786      "write": true,
00:11:01.786      "unmap": true,
00:11:01.786      "flush": true,
00:11:01.786      "reset": true,
00:11:01.786      "nvme_admin": false,
00:11:01.786      "nvme_io": false,
00:11:01.786      "nvme_io_md": false,
00:11:01.786      "write_zeroes": true,
00:11:01.786      "zcopy": false,
00:11:01.786      "get_zone_info": false,
00:11:01.786      "zone_management": false,
00:11:01.786      "zone_append": false,
00:11:01.786      "compare": false,
00:11:01.786      "compare_and_write": false,
00:11:01.786      "abort": false,
00:11:01.786      "seek_hole": false,
00:11:01.786      "seek_data": false,
00:11:01.786      "copy": false,
00:11:01.786      "nvme_iov_md": false
00:11:01.786    },
00:11:01.786    "memory_domains": [
00:11:01.786      {
00:11:01.786        "dma_device_id": "system",
00:11:01.786        "dma_device_type": 1
00:11:01.786      },
00:11:01.786      {
00:11:01.786        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:01.786        "dma_device_type": 2
00:11:01.786      },
00:11:01.786      {
00:11:01.786        "dma_device_id": "system",
00:11:01.786        "dma_device_type": 1
00:11:01.786      },
00:11:01.786      {
00:11:01.786        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:01.786        "dma_device_type": 2
00:11:01.786      },
00:11:01.786      {
00:11:01.786        "dma_device_id": "system",
00:11:01.786        "dma_device_type": 1
00:11:01.786      },
00:11:01.786      {
00:11:01.786        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:01.786        "dma_device_type": 2
00:11:01.786      },
00:11:01.786      {
00:11:01.786        "dma_device_id": "system",
00:11:01.786        "dma_device_type": 1
00:11:01.786      },
00:11:01.786      {
00:11:01.786        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:01.786        "dma_device_type": 2
00:11:01.786      }
00:11:01.786    ],
00:11:01.786    "driver_specific": {
00:11:01.786      "raid": {
00:11:01.786        "uuid": "c85c467d-b0eb-4edc-85b3-8772bf08be51",
00:11:01.786        "strip_size_kb": 64,
00:11:01.786        "state": "online",
00:11:01.786        "raid_level": "raid0",
00:11:01.786        "superblock": false,
00:11:01.786        "num_base_bdevs": 4,
00:11:01.786        "num_base_bdevs_discovered": 4,
00:11:01.786        "num_base_bdevs_operational": 4,
00:11:01.786        "base_bdevs_list": [
00:11:01.786          {
00:11:01.786            "name": "BaseBdev1",
00:11:01.786            "uuid": "c7f78a8e-8e80-4d38-ba0a-112d2a610153",
00:11:01.786            "is_configured": true,
00:11:01.786            "data_offset": 0,
00:11:01.786            "data_size": 65536
00:11:01.786          },
00:11:01.786          {
00:11:01.786            "name": "BaseBdev2",
00:11:01.786            "uuid": "2406b4ee-daca-49cb-949e-087a31bccfbd",
00:11:01.786            "is_configured": true,
00:11:01.786            "data_offset": 0,
00:11:01.786            "data_size": 65536
00:11:01.786          },
00:11:01.786          {
00:11:01.786            "name": "BaseBdev3",
00:11:01.786            "uuid": "2413662a-89b2-459a-85d9-a4633ac89ce2",
00:11:01.786            "is_configured": true,
00:11:01.786            "data_offset": 0,
00:11:01.786            "data_size": 65536
00:11:01.786          },
00:11:01.786          {
00:11:01.786            "name": "BaseBdev4",
00:11:01.786            "uuid": "36ce6de1-1d7a-4236-8f6e-c6ef753cca46",
00:11:01.786            "is_configured": true,
00:11:01.786            "data_offset": 0,
00:11:01.786            "data_size": 65536
00:11:01.786          }
00:11:01.786        ]
00:11:01.786      }
00:11:01.786    }
00:11:01.786  }'
00:11:01.786    11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:01.786   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:11:01.786  BaseBdev2
00:11:01.786  BaseBdev3
00:11:01.786  BaseBdev4'
00:11:01.786    11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:01.786   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:11:01.786   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:01.786    11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:01.786    11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:11:01.786    11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:01.786    11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:01.786    11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:01.786   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:01.786   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:01.786   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:01.786    11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:11:01.786    11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:01.786    11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:01.786    11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:01.786    11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:01.786   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:01.786   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:01.786   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:01.786    11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:01.786    11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:11:01.786    11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:01.786    11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:01.786    11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:01.786   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:01.786   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:01.786   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:01.786    11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:11:01.786    11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:01.786    11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:01.786    11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:02.046    11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:02.046   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:02.046   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:02.046   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:11:02.046   11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:02.046   11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:02.046  [2024-12-16 11:32:27.903710] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:11:02.046  [2024-12-16 11:32:27.903741] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:02.046  [2024-12-16 11:32:27.903812] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:02.046   11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:02.046   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:11:02.046   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0
00:11:02.046   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:11:02.046   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:11:02.046   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:11:02.046   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3
00:11:02.046   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:02.046   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:11:02.046   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:02.046   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:02.046   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:02.046   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:02.046   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:02.046   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:02.046   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:02.046    11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:02.046    11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:02.046    11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:02.046    11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:02.046    11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:02.046   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:02.046    "name": "Existed_Raid",
00:11:02.046    "uuid": "c85c467d-b0eb-4edc-85b3-8772bf08be51",
00:11:02.046    "strip_size_kb": 64,
00:11:02.046    "state": "offline",
00:11:02.046    "raid_level": "raid0",
00:11:02.046    "superblock": false,
00:11:02.046    "num_base_bdevs": 4,
00:11:02.046    "num_base_bdevs_discovered": 3,
00:11:02.046    "num_base_bdevs_operational": 3,
00:11:02.046    "base_bdevs_list": [
00:11:02.046      {
00:11:02.046        "name": null,
00:11:02.046        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:02.046        "is_configured": false,
00:11:02.046        "data_offset": 0,
00:11:02.046        "data_size": 65536
00:11:02.046      },
00:11:02.046      {
00:11:02.046        "name": "BaseBdev2",
00:11:02.046        "uuid": "2406b4ee-daca-49cb-949e-087a31bccfbd",
00:11:02.046        "is_configured": true,
00:11:02.046        "data_offset": 0,
00:11:02.046        "data_size": 65536
00:11:02.046      },
00:11:02.046      {
00:11:02.046        "name": "BaseBdev3",
00:11:02.046        "uuid": "2413662a-89b2-459a-85d9-a4633ac89ce2",
00:11:02.046        "is_configured": true,
00:11:02.046        "data_offset": 0,
00:11:02.046        "data_size": 65536
00:11:02.046      },
00:11:02.046      {
00:11:02.046        "name": "BaseBdev4",
00:11:02.046        "uuid": "36ce6de1-1d7a-4236-8f6e-c6ef753cca46",
00:11:02.046        "is_configured": true,
00:11:02.046        "data_offset": 0,
00:11:02.046        "data_size": 65536
00:11:02.046      }
00:11:02.046    ]
00:11:02.046  }'
00:11:02.046   11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:02.046   11:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
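Because raid0 provides no redundancy (has_redundancy returns 1 above, so the expected state becomes offline), deleting a single base bdev takes Existed_Raid straight from "online" to "offline"; the remaining three base bdevs stay listed while the removed slot is reported with a null name. A hedged standalone sketch of the same check, again assuming the scripts/rpc.py client:

  ./scripts/rpc.py bdev_malloc_delete BaseBdev1
  ./scripts/rpc.py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'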
00:11:02.615   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:11:02.615   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:02.616    11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:02.616    11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:11:02.616    11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:02.616    11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:02.616    11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:02.616  [2024-12-16 11:32:28.438334] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:02.616    11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:02.616    11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:02.616    11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:11:02.616    11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:02.616    11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:02.616  [2024-12-16 11:32:28.509551] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:02.616    11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:02.616    11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:02.616    11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:11:02.616    11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:02.616    11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:02.616  [2024-12-16 11:32:28.580939] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:11:02.616  [2024-12-16 11:32:28.581035] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:02.616    11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:02.616    11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:11:02.616    11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:02.616    11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:02.616    11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
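At this point the array and all of its base bdevs have been torn down (the bdev_raid_get_bdevs query above returns nothing), and the next phase recreates BaseBdev2 through BaseBdev4 while deliberately leaving BaseBdev1 out before issuing the raid creation seen further below. A rough direct form of that call, under the same scripts/rpc.py assumption:

  ./scripts/rpc.py bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  # with BaseBdev1 still missing, the array is created in the "configuring" state,
  # as the trace that follows confirms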
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:02.616  BaseBdev2
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:02.616   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:02.876  [
00:11:02.876  {
00:11:02.876  "name": "BaseBdev2",
00:11:02.876  "aliases": [
00:11:02.876  "a3dceb54-860b-4a1b-89fa-927f7c1f7ec4"
00:11:02.876  ],
00:11:02.876  "product_name": "Malloc disk",
00:11:02.876  "block_size": 512,
00:11:02.876  "num_blocks": 65536,
00:11:02.876  "uuid": "a3dceb54-860b-4a1b-89fa-927f7c1f7ec4",
00:11:02.876  "assigned_rate_limits": {
00:11:02.876  "rw_ios_per_sec": 0,
00:11:02.876  "rw_mbytes_per_sec": 0,
00:11:02.876  "r_mbytes_per_sec": 0,
00:11:02.876  "w_mbytes_per_sec": 0
00:11:02.876  },
00:11:02.876  "claimed": false,
00:11:02.876  "zoned": false,
00:11:02.876  "supported_io_types": {
00:11:02.876  "read": true,
00:11:02.876  "write": true,
00:11:02.876  "unmap": true,
00:11:02.876  "flush": true,
00:11:02.876  "reset": true,
00:11:02.876  "nvme_admin": false,
00:11:02.876  "nvme_io": false,
00:11:02.876  "nvme_io_md": false,
00:11:02.876  "write_zeroes": true,
00:11:02.876  "zcopy": true,
00:11:02.876  "get_zone_info": false,
00:11:02.876  "zone_management": false,
00:11:02.876  "zone_append": false,
00:11:02.876  "compare": false,
00:11:02.876  "compare_and_write": false,
00:11:02.876  "abort": true,
00:11:02.876  "seek_hole": false,
00:11:02.876  "seek_data": false,
00:11:02.876  "copy": true,
00:11:02.876  "nvme_iov_md": false
00:11:02.876  },
00:11:02.876  "memory_domains": [
00:11:02.876  {
00:11:02.876  "dma_device_id": "system",
00:11:02.876  "dma_device_type": 1
00:11:02.876  },
00:11:02.876  {
00:11:02.876  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:02.876  "dma_device_type": 2
00:11:02.876  }
00:11:02.876  ],
00:11:02.876  "driver_specific": {}
00:11:02.876  }
00:11:02.876  ]
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:02.876  BaseBdev3
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:02.876  [
00:11:02.876  {
00:11:02.876  "name": "BaseBdev3",
00:11:02.876  "aliases": [
00:11:02.876  "7069cb06-1e61-4cd2-bc88-69991edd79e1"
00:11:02.876  ],
00:11:02.876  "product_name": "Malloc disk",
00:11:02.876  "block_size": 512,
00:11:02.876  "num_blocks": 65536,
00:11:02.876  "uuid": "7069cb06-1e61-4cd2-bc88-69991edd79e1",
00:11:02.876  "assigned_rate_limits": {
00:11:02.876  "rw_ios_per_sec": 0,
00:11:02.876  "rw_mbytes_per_sec": 0,
00:11:02.876  "r_mbytes_per_sec": 0,
00:11:02.876  "w_mbytes_per_sec": 0
00:11:02.876  },
00:11:02.876  "claimed": false,
00:11:02.876  "zoned": false,
00:11:02.876  "supported_io_types": {
00:11:02.876  "read": true,
00:11:02.876  "write": true,
00:11:02.876  "unmap": true,
00:11:02.876  "flush": true,
00:11:02.876  "reset": true,
00:11:02.876  "nvme_admin": false,
00:11:02.876  "nvme_io": false,
00:11:02.876  "nvme_io_md": false,
00:11:02.876  "write_zeroes": true,
00:11:02.876  "zcopy": true,
00:11:02.876  "get_zone_info": false,
00:11:02.876  "zone_management": false,
00:11:02.876  "zone_append": false,
00:11:02.876  "compare": false,
00:11:02.876  "compare_and_write": false,
00:11:02.876  "abort": true,
00:11:02.876  "seek_hole": false,
00:11:02.876  "seek_data": false,
00:11:02.876  "copy": true,
00:11:02.876  "nvme_iov_md": false
00:11:02.876  },
00:11:02.876  "memory_domains": [
00:11:02.876  {
00:11:02.876  "dma_device_id": "system",
00:11:02.876  "dma_device_type": 1
00:11:02.876  },
00:11:02.876  {
00:11:02.876  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:02.876  "dma_device_type": 2
00:11:02.876  }
00:11:02.876  ],
00:11:02.876  "driver_specific": {}
00:11:02.876  }
00:11:02.876  ]
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:02.876  BaseBdev4
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:02.876   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:02.876  [
00:11:02.876  {
00:11:02.876  "name": "BaseBdev4",
00:11:02.876  "aliases": [
00:11:02.876  "27053c95-58ef-48da-86ee-9fd9d3ffe62a"
00:11:02.876  ],
00:11:02.876  "product_name": "Malloc disk",
00:11:02.876  "block_size": 512,
00:11:02.876  "num_blocks": 65536,
00:11:02.876  "uuid": "27053c95-58ef-48da-86ee-9fd9d3ffe62a",
00:11:02.876  "assigned_rate_limits": {
00:11:02.876  "rw_ios_per_sec": 0,
00:11:02.876  "rw_mbytes_per_sec": 0,
00:11:02.876  "r_mbytes_per_sec": 0,
00:11:02.876  "w_mbytes_per_sec": 0
00:11:02.876  },
00:11:02.876  "claimed": false,
00:11:02.877  "zoned": false,
00:11:02.877  "supported_io_types": {
00:11:02.877  "read": true,
00:11:02.877  "write": true,
00:11:02.877  "unmap": true,
00:11:02.877  "flush": true,
00:11:02.877  "reset": true,
00:11:02.877  "nvme_admin": false,
00:11:02.877  "nvme_io": false,
00:11:02.877  "nvme_io_md": false,
00:11:02.877  "write_zeroes": true,
00:11:02.877  "zcopy": true,
00:11:02.877  "get_zone_info": false,
00:11:02.877  "zone_management": false,
00:11:02.877  "zone_append": false,
00:11:02.877  "compare": false,
00:11:02.877  "compare_and_write": false,
00:11:02.877  "abort": true,
00:11:02.877  "seek_hole": false,
00:11:02.877  "seek_data": false,
00:11:02.877  "copy": true,
00:11:02.877  "nvme_iov_md": false
00:11:02.877  },
00:11:02.877  "memory_domains": [
00:11:02.877  {
00:11:02.877  "dma_device_id": "system",
00:11:02.877  "dma_device_type": 1
00:11:02.877  },
00:11:02.877  {
00:11:02.877  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:02.877  "dma_device_type": 2
00:11:02.877  }
00:11:02.877  ],
00:11:02.877  "driver_specific": {}
00:11:02.877  }
00:11:02.877  ]
00:11:02.877   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:02.877   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:11:02.877   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:11:02.877   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:02.877   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:02.877   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:02.877   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:02.877  [2024-12-16 11:32:28.811041] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:02.877  [2024-12-16 11:32:28.811130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:02.877  [2024-12-16 11:32:28.811171] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:02.877  [2024-12-16 11:32:28.813061] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:02.877  [2024-12-16 11:32:28.813152] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:11:02.877   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:02.877   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:11:02.877   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:02.877   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:02.877   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:02.877   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:02.877   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:02.877   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:02.877   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:02.877   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:02.877   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:02.877    11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:02.877    11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:02.877    11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:02.877    11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:02.877    11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:02.877   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:02.877    "name": "Existed_Raid",
00:11:02.877    "uuid": "00000000-0000-0000-0000-000000000000",
00:11:02.877    "strip_size_kb": 64,
00:11:02.877    "state": "configuring",
00:11:02.877    "raid_level": "raid0",
00:11:02.877    "superblock": false,
00:11:02.877    "num_base_bdevs": 4,
00:11:02.877    "num_base_bdevs_discovered": 3,
00:11:02.877    "num_base_bdevs_operational": 4,
00:11:02.877    "base_bdevs_list": [
00:11:02.877      {
00:11:02.877        "name": "BaseBdev1",
00:11:02.877        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:02.877        "is_configured": false,
00:11:02.877        "data_offset": 0,
00:11:02.877        "data_size": 0
00:11:02.877      },
00:11:02.877      {
00:11:02.877        "name": "BaseBdev2",
00:11:02.877        "uuid": "a3dceb54-860b-4a1b-89fa-927f7c1f7ec4",
00:11:02.877        "is_configured": true,
00:11:02.877        "data_offset": 0,
00:11:02.877        "data_size": 65536
00:11:02.877      },
00:11:02.877      {
00:11:02.877        "name": "BaseBdev3",
00:11:02.877        "uuid": "7069cb06-1e61-4cd2-bc88-69991edd79e1",
00:11:02.877        "is_configured": true,
00:11:02.877        "data_offset": 0,
00:11:02.877        "data_size": 65536
00:11:02.877      },
00:11:02.877      {
00:11:02.877        "name": "BaseBdev4",
00:11:02.877        "uuid": "27053c95-58ef-48da-86ee-9fd9d3ffe62a",
00:11:02.877        "is_configured": true,
00:11:02.877        "data_offset": 0,
00:11:02.877        "data_size": 65536
00:11:02.877      }
00:11:02.877    ]
00:11:02.877  }'
00:11:02.877   11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:02.877   11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
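At this point bdev_raid_create has been asked to build a raid0 volume from four base bdevs, but BaseBdev1 does not exist yet: the notices above show only BaseBdev2 through BaseBdev4 being claimed, so Existed_Raid is created in the "configuring" state with 3 of 4 base bdevs discovered. A raid0 bdev only comes online once every base bdev has been configured. A rough standalone equivalent of this step and its check, again assuming the stock scripts/rpc.py client and default socket:

  # 4-member raid0 with a 64 KiB strip; a missing member leaves the volume "configuring"
  ./scripts/rpc.py bdev_raid_create -n Existed_Raid -r raid0 -z 64 \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'
  ./scripts/rpc.py bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # expect "configuring"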
00:11:03.445   11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:11:03.445   11:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:03.445   11:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:03.445  [2024-12-16 11:32:29.246352] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:11:03.445   11:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:03.445   11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:11:03.445   11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:03.445   11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:03.445   11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:03.445   11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:03.445   11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:03.445   11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:03.445   11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:03.445   11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:03.445   11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:03.445    11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:03.445    11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:03.445    11:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:03.445    11:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:03.445    11:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:03.445   11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:03.445    "name": "Existed_Raid",
00:11:03.446    "uuid": "00000000-0000-0000-0000-000000000000",
00:11:03.446    "strip_size_kb": 64,
00:11:03.446    "state": "configuring",
00:11:03.446    "raid_level": "raid0",
00:11:03.446    "superblock": false,
00:11:03.446    "num_base_bdevs": 4,
00:11:03.446    "num_base_bdevs_discovered": 2,
00:11:03.446    "num_base_bdevs_operational": 4,
00:11:03.446    "base_bdevs_list": [
00:11:03.446      {
00:11:03.446        "name": "BaseBdev1",
00:11:03.446        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:03.446        "is_configured": false,
00:11:03.446        "data_offset": 0,
00:11:03.446        "data_size": 0
00:11:03.446      },
00:11:03.446      {
00:11:03.446        "name": null,
00:11:03.446        "uuid": "a3dceb54-860b-4a1b-89fa-927f7c1f7ec4",
00:11:03.446        "is_configured": false,
00:11:03.446        "data_offset": 0,
00:11:03.446        "data_size": 65536
00:11:03.446      },
00:11:03.446      {
00:11:03.446        "name": "BaseBdev3",
00:11:03.446        "uuid": "7069cb06-1e61-4cd2-bc88-69991edd79e1",
00:11:03.446        "is_configured": true,
00:11:03.446        "data_offset": 0,
00:11:03.446        "data_size": 65536
00:11:03.446      },
00:11:03.446      {
00:11:03.446        "name": "BaseBdev4",
00:11:03.446        "uuid": "27053c95-58ef-48da-86ee-9fd9d3ffe62a",
00:11:03.446        "is_configured": true,
00:11:03.446        "data_offset": 0,
00:11:03.446        "data_size": 65536
00:11:03.446      }
00:11:03.446    ]
00:11:03.446  }'
00:11:03.446   11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:03.446   11:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
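Removing BaseBdev2 from the still-configuring array drops num_base_bdevs_discovered from 3 to 2; the slot itself stays in base_bdevs_list, but its name becomes null and is_configured flips to false, which the very next step confirms with jq. Sketch, under the same assumptions as above:

  ./scripts/rpc.py bdev_raid_remove_base_bdev BaseBdev2
  # slot 1 should now be reported as unconfigured
  ./scripts/rpc.py bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[1].is_configured'   # expect false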
00:11:03.705    11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:03.705    11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:11:03.705    11:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:03.705    11:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:03.705    11:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:03.705   11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:11:03.705   11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:11:03.705   11:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:03.705   11:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:03.705  [2024-12-16 11:32:29.756462] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:03.705  BaseBdev1
00:11:03.705   11:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:03.705   11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:11:03.705   11:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:11:03.705   11:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:03.705   11:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:11:03.705   11:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:03.705   11:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:03.705   11:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:03.705   11:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:03.705   11:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:03.705   11:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:03.705   11:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:11:03.705   11:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:03.705   11:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:03.964  [
00:11:03.964  {
00:11:03.964  "name": "BaseBdev1",
00:11:03.964  "aliases": [
00:11:03.964  "18f89dfa-d388-4d5c-b3e2-5d56b45802fd"
00:11:03.964  ],
00:11:03.964  "product_name": "Malloc disk",
00:11:03.964  "block_size": 512,
00:11:03.964  "num_blocks": 65536,
00:11:03.964  "uuid": "18f89dfa-d388-4d5c-b3e2-5d56b45802fd",
00:11:03.964  "assigned_rate_limits": {
00:11:03.964  "rw_ios_per_sec": 0,
00:11:03.964  "rw_mbytes_per_sec": 0,
00:11:03.964  "r_mbytes_per_sec": 0,
00:11:03.964  "w_mbytes_per_sec": 0
00:11:03.964  },
00:11:03.964  "claimed": true,
00:11:03.964  "claim_type": "exclusive_write",
00:11:03.964  "zoned": false,
00:11:03.964  "supported_io_types": {
00:11:03.964  "read": true,
00:11:03.964  "write": true,
00:11:03.964  "unmap": true,
00:11:03.964  "flush": true,
00:11:03.964  "reset": true,
00:11:03.964  "nvme_admin": false,
00:11:03.964  "nvme_io": false,
00:11:03.964  "nvme_io_md": false,
00:11:03.964  "write_zeroes": true,
00:11:03.964  "zcopy": true,
00:11:03.964  "get_zone_info": false,
00:11:03.964  "zone_management": false,
00:11:03.964  "zone_append": false,
00:11:03.964  "compare": false,
00:11:03.964  "compare_and_write": false,
00:11:03.964  "abort": true,
00:11:03.964  "seek_hole": false,
00:11:03.964  "seek_data": false,
00:11:03.964  "copy": true,
00:11:03.964  "nvme_iov_md": false
00:11:03.964  },
00:11:03.964  "memory_domains": [
00:11:03.964  {
00:11:03.964  "dma_device_id": "system",
00:11:03.964  "dma_device_type": 1
00:11:03.964  },
00:11:03.964  {
00:11:03.964  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:03.964  "dma_device_type": 2
00:11:03.964  }
00:11:03.964  ],
00:11:03.964  "driver_specific": {}
00:11:03.964  }
00:11:03.964  ]
00:11:03.964   11:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:03.964   11:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:11:03.964   11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:11:03.964   11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:03.964   11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:03.964   11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:03.964   11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:03.964   11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:03.964   11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:03.964   11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:03.964   11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:03.964   11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:03.964    11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:03.964    11:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:03.964    11:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:03.964    11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:03.964    11:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:03.964   11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:03.964    "name": "Existed_Raid",
00:11:03.964    "uuid": "00000000-0000-0000-0000-000000000000",
00:11:03.964    "strip_size_kb": 64,
00:11:03.964    "state": "configuring",
00:11:03.964    "raid_level": "raid0",
00:11:03.964    "superblock": false,
00:11:03.964    "num_base_bdevs": 4,
00:11:03.964    "num_base_bdevs_discovered": 3,
00:11:03.964    "num_base_bdevs_operational": 4,
00:11:03.964    "base_bdevs_list": [
00:11:03.964      {
00:11:03.964        "name": "BaseBdev1",
00:11:03.964        "uuid": "18f89dfa-d388-4d5c-b3e2-5d56b45802fd",
00:11:03.964        "is_configured": true,
00:11:03.964        "data_offset": 0,
00:11:03.964        "data_size": 65536
00:11:03.964      },
00:11:03.964      {
00:11:03.964        "name": null,
00:11:03.964        "uuid": "a3dceb54-860b-4a1b-89fa-927f7c1f7ec4",
00:11:03.964        "is_configured": false,
00:11:03.964        "data_offset": 0,
00:11:03.964        "data_size": 65536
00:11:03.964      },
00:11:03.964      {
00:11:03.964        "name": "BaseBdev3",
00:11:03.964        "uuid": "7069cb06-1e61-4cd2-bc88-69991edd79e1",
00:11:03.964        "is_configured": true,
00:11:03.964        "data_offset": 0,
00:11:03.964        "data_size": 65536
00:11:03.964      },
00:11:03.964      {
00:11:03.964        "name": "BaseBdev4",
00:11:03.964        "uuid": "27053c95-58ef-48da-86ee-9fd9d3ffe62a",
00:11:03.964        "is_configured": true,
00:11:03.964        "data_offset": 0,
00:11:03.964        "data_size": 65536
00:11:03.964      }
00:11:03.964    ]
00:11:03.964  }'
00:11:03.964   11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:03.964   11:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
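Creating the missing BaseBdev1 (a 32 MiB malloc bdev with 512-byte blocks) is enough for the raid module to claim it on sight: the dump above shows claimed true with claim_type exclusive_write, and num_base_bdevs_discovered goes back up to 3. Slot 1 (the removed BaseBdev2) is still empty, so the array stays "configuring". Rough standalone equivalent:

  # 32 MiB backing store, 512-byte blocks, registered under the name BaseBdev1
  ./scripts/rpc.py bdev_malloc_create 32 512 -b BaseBdev1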
00:11:04.224    11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:04.224    11:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:04.224    11:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:04.224    11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:11:04.224    11:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:04.224   11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:11:04.224   11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:11:04.224   11:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:04.224   11:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:04.483  [2024-12-16 11:32:30.291660] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:11:04.483   11:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:04.483   11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:11:04.483   11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:04.483   11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:04.483   11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:04.483   11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:04.483   11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:04.483   11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:04.483   11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:04.483   11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:04.483   11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:04.483    11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:04.483    11:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:04.483    11:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:04.483    11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:04.483    11:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:04.483   11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:04.483    "name": "Existed_Raid",
00:11:04.483    "uuid": "00000000-0000-0000-0000-000000000000",
00:11:04.484    "strip_size_kb": 64,
00:11:04.484    "state": "configuring",
00:11:04.484    "raid_level": "raid0",
00:11:04.484    "superblock": false,
00:11:04.484    "num_base_bdevs": 4,
00:11:04.484    "num_base_bdevs_discovered": 2,
00:11:04.484    "num_base_bdevs_operational": 4,
00:11:04.484    "base_bdevs_list": [
00:11:04.484      {
00:11:04.484        "name": "BaseBdev1",
00:11:04.484        "uuid": "18f89dfa-d388-4d5c-b3e2-5d56b45802fd",
00:11:04.484        "is_configured": true,
00:11:04.484        "data_offset": 0,
00:11:04.484        "data_size": 65536
00:11:04.484      },
00:11:04.484      {
00:11:04.484        "name": null,
00:11:04.484        "uuid": "a3dceb54-860b-4a1b-89fa-927f7c1f7ec4",
00:11:04.484        "is_configured": false,
00:11:04.484        "data_offset": 0,
00:11:04.484        "data_size": 65536
00:11:04.484      },
00:11:04.484      {
00:11:04.484        "name": null,
00:11:04.484        "uuid": "7069cb06-1e61-4cd2-bc88-69991edd79e1",
00:11:04.484        "is_configured": false,
00:11:04.484        "data_offset": 0,
00:11:04.484        "data_size": 65536
00:11:04.484      },
00:11:04.484      {
00:11:04.484        "name": "BaseBdev4",
00:11:04.484        "uuid": "27053c95-58ef-48da-86ee-9fd9d3ffe62a",
00:11:04.484        "is_configured": true,
00:11:04.484        "data_offset": 0,
00:11:04.484        "data_size": 65536
00:11:04.484      }
00:11:04.484    ]
00:11:04.484  }'
00:11:04.484   11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:04.484   11:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:04.796    11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:04.796    11:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:04.796    11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:11:04.796    11:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:04.796    11:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:04.796   11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:11:04.796   11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:11:04.796   11:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:04.796   11:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:04.796  [2024-12-16 11:32:30.766899] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:04.796   11:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:04.796   11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:11:04.796   11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:04.796   11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:04.796   11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:04.796   11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:04.796   11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:04.796   11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:04.796   11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:04.796   11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:04.796   11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:04.796    11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:04.796    11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:04.796    11:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:04.796    11:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:04.796    11:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:04.796   11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:04.796    "name": "Existed_Raid",
00:11:04.796    "uuid": "00000000-0000-0000-0000-000000000000",
00:11:04.796    "strip_size_kb": 64,
00:11:04.796    "state": "configuring",
00:11:04.796    "raid_level": "raid0",
00:11:04.796    "superblock": false,
00:11:04.796    "num_base_bdevs": 4,
00:11:04.796    "num_base_bdevs_discovered": 3,
00:11:04.796    "num_base_bdevs_operational": 4,
00:11:04.796    "base_bdevs_list": [
00:11:04.796      {
00:11:04.796        "name": "BaseBdev1",
00:11:04.796        "uuid": "18f89dfa-d388-4d5c-b3e2-5d56b45802fd",
00:11:04.796        "is_configured": true,
00:11:04.796        "data_offset": 0,
00:11:04.796        "data_size": 65536
00:11:04.796      },
00:11:04.797      {
00:11:04.797        "name": null,
00:11:04.797        "uuid": "a3dceb54-860b-4a1b-89fa-927f7c1f7ec4",
00:11:04.797        "is_configured": false,
00:11:04.797        "data_offset": 0,
00:11:04.797        "data_size": 65536
00:11:04.797      },
00:11:04.797      {
00:11:04.797        "name": "BaseBdev3",
00:11:04.797        "uuid": "7069cb06-1e61-4cd2-bc88-69991edd79e1",
00:11:04.797        "is_configured": true,
00:11:04.797        "data_offset": 0,
00:11:04.797        "data_size": 65536
00:11:04.797      },
00:11:04.797      {
00:11:04.797        "name": "BaseBdev4",
00:11:04.797        "uuid": "27053c95-58ef-48da-86ee-9fd9d3ffe62a",
00:11:04.797        "is_configured": true,
00:11:04.797        "data_offset": 0,
00:11:04.797        "data_size": 65536
00:11:04.797      }
00:11:04.797    ]
00:11:04.797  }'
00:11:04.797   11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:04.797   11:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
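bdev_raid_add_base_bdev is the inverse of the removal above: it hands an already-existing bdev (BaseBdev3, detached two steps earlier but never deleted) back to a named raid bdev, and the raid module re-claims it, so slot 2 is configured again; only slot 1 is still empty at this point. Sketch:

  ./scripts/rpc.py bdev_raid_add_base_bdev Existed_Raid BaseBdev3
  ./scripts/rpc.py bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'   # expect true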
00:11:05.366    11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:11:05.366    11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:05.366    11:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:05.366    11:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:05.366    11:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:05.366   11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:11:05.366   11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:11:05.366   11:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:05.366   11:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:05.366  [2024-12-16 11:32:31.254104] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:11:05.366   11:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:05.366   11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:11:05.366   11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:05.366   11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:05.366   11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:05.366   11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:05.366   11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:05.366   11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:05.366   11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:05.366   11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:05.366   11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:05.366    11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:05.366    11:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:05.366    11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:05.366    11:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:05.366    11:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:05.366   11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:05.366    "name": "Existed_Raid",
00:11:05.366    "uuid": "00000000-0000-0000-0000-000000000000",
00:11:05.366    "strip_size_kb": 64,
00:11:05.366    "state": "configuring",
00:11:05.366    "raid_level": "raid0",
00:11:05.366    "superblock": false,
00:11:05.366    "num_base_bdevs": 4,
00:11:05.366    "num_base_bdevs_discovered": 2,
00:11:05.366    "num_base_bdevs_operational": 4,
00:11:05.366    "base_bdevs_list": [
00:11:05.366      {
00:11:05.366        "name": null,
00:11:05.366        "uuid": "18f89dfa-d388-4d5c-b3e2-5d56b45802fd",
00:11:05.366        "is_configured": false,
00:11:05.366        "data_offset": 0,
00:11:05.366        "data_size": 65536
00:11:05.366      },
00:11:05.366      {
00:11:05.366        "name": null,
00:11:05.366        "uuid": "a3dceb54-860b-4a1b-89fa-927f7c1f7ec4",
00:11:05.366        "is_configured": false,
00:11:05.366        "data_offset": 0,
00:11:05.366        "data_size": 65536
00:11:05.366      },
00:11:05.366      {
00:11:05.366        "name": "BaseBdev3",
00:11:05.366        "uuid": "7069cb06-1e61-4cd2-bc88-69991edd79e1",
00:11:05.366        "is_configured": true,
00:11:05.366        "data_offset": 0,
00:11:05.366        "data_size": 65536
00:11:05.366      },
00:11:05.366      {
00:11:05.366        "name": "BaseBdev4",
00:11:05.366        "uuid": "27053c95-58ef-48da-86ee-9fd9d3ffe62a",
00:11:05.366        "is_configured": true,
00:11:05.366        "data_offset": 0,
00:11:05.366        "data_size": 65536
00:11:05.366      }
00:11:05.366    ]
00:11:05.366  }'
00:11:05.366   11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:05.366   11:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:05.626    11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:05.626    11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:11:05.626    11:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:05.626    11:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:05.886    11:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:05.886   11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:11:05.886   11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:11:05.886   11:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:05.886   11:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:05.886  [2024-12-16 11:32:31.723756] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:05.886   11:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:05.886   11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:11:05.886   11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:05.886   11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:05.886   11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:05.886   11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:05.886   11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:05.886   11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:05.886   11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:05.886   11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:05.886   11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:05.886    11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:05.886    11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:05.886    11:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:05.886    11:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:05.886    11:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:05.886   11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:05.886    "name": "Existed_Raid",
00:11:05.886    "uuid": "00000000-0000-0000-0000-000000000000",
00:11:05.886    "strip_size_kb": 64,
00:11:05.886    "state": "configuring",
00:11:05.886    "raid_level": "raid0",
00:11:05.886    "superblock": false,
00:11:05.886    "num_base_bdevs": 4,
00:11:05.886    "num_base_bdevs_discovered": 3,
00:11:05.886    "num_base_bdevs_operational": 4,
00:11:05.886    "base_bdevs_list": [
00:11:05.886      {
00:11:05.886        "name": null,
00:11:05.886        "uuid": "18f89dfa-d388-4d5c-b3e2-5d56b45802fd",
00:11:05.886        "is_configured": false,
00:11:05.886        "data_offset": 0,
00:11:05.886        "data_size": 65536
00:11:05.886      },
00:11:05.886      {
00:11:05.886        "name": "BaseBdev2",
00:11:05.886        "uuid": "a3dceb54-860b-4a1b-89fa-927f7c1f7ec4",
00:11:05.886        "is_configured": true,
00:11:05.886        "data_offset": 0,
00:11:05.886        "data_size": 65536
00:11:05.886      },
00:11:05.886      {
00:11:05.886        "name": "BaseBdev3",
00:11:05.886        "uuid": "7069cb06-1e61-4cd2-bc88-69991edd79e1",
00:11:05.886        "is_configured": true,
00:11:05.886        "data_offset": 0,
00:11:05.886        "data_size": 65536
00:11:05.886      },
00:11:05.886      {
00:11:05.886        "name": "BaseBdev4",
00:11:05.886        "uuid": "27053c95-58ef-48da-86ee-9fd9d3ffe62a",
00:11:05.886        "is_configured": true,
00:11:05.886        "data_offset": 0,
00:11:05.886        "data_size": 65536
00:11:05.886      }
00:11:05.886    ]
00:11:05.886  }'
00:11:05.886   11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:05.886   11:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.146    11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:06.146    11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:06.146    11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.146    11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:11:06.146    11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:06.406   11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:11:06.406    11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:11:06.406    11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:06.406    11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:06.406    11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.406    11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:06.406   11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 18f89dfa-d388-4d5c-b3e2-5d56b45802fd
00:11:06.406   11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:06.406   11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.406  [2024-12-16 11:32:32.274049] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:11:06.406  [2024-12-16 11:32:32.274095] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:11:06.406  [2024-12-16 11:32:32.274102] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512
00:11:06.406  [2024-12-16 11:32:32.274344] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:11:06.406  [2024-12-16 11:32:32.274453] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:11:06.406  [2024-12-16 11:32:32.274465] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00
00:11:06.406  [2024-12-16 11:32:32.274656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:06.406  NewBaseBdev
00:11:06.406   11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:06.406   11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:11:06.406   11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev
00:11:06.406   11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:06.406   11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:11:06.406   11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:06.406   11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:06.406   11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:06.406   11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:06.406   11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.406   11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:06.406   11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:11:06.406   11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:06.406   11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.406  [
00:11:06.406  {
00:11:06.406  "name": "NewBaseBdev",
00:11:06.406  "aliases": [
00:11:06.406  "18f89dfa-d388-4d5c-b3e2-5d56b45802fd"
00:11:06.406  ],
00:11:06.406  "product_name": "Malloc disk",
00:11:06.406  "block_size": 512,
00:11:06.406  "num_blocks": 65536,
00:11:06.406  "uuid": "18f89dfa-d388-4d5c-b3e2-5d56b45802fd",
00:11:06.406  "assigned_rate_limits": {
00:11:06.406  "rw_ios_per_sec": 0,
00:11:06.406  "rw_mbytes_per_sec": 0,
00:11:06.406  "r_mbytes_per_sec": 0,
00:11:06.406  "w_mbytes_per_sec": 0
00:11:06.406  },
00:11:06.406  "claimed": true,
00:11:06.406  "claim_type": "exclusive_write",
00:11:06.406  "zoned": false,
00:11:06.406  "supported_io_types": {
00:11:06.406  "read": true,
00:11:06.406  "write": true,
00:11:06.406  "unmap": true,
00:11:06.406  "flush": true,
00:11:06.406  "reset": true,
00:11:06.406  "nvme_admin": false,
00:11:06.407  "nvme_io": false,
00:11:06.407  "nvme_io_md": false,
00:11:06.407  "write_zeroes": true,
00:11:06.407  "zcopy": true,
00:11:06.407  "get_zone_info": false,
00:11:06.407  "zone_management": false,
00:11:06.407  "zone_append": false,
00:11:06.407  "compare": false,
00:11:06.407  "compare_and_write": false,
00:11:06.407  "abort": true,
00:11:06.407  "seek_hole": false,
00:11:06.407  "seek_data": false,
00:11:06.407  "copy": true,
00:11:06.407  "nvme_iov_md": false
00:11:06.407  },
00:11:06.407  "memory_domains": [
00:11:06.407  {
00:11:06.407  "dma_device_id": "system",
00:11:06.407  "dma_device_type": 1
00:11:06.407  },
00:11:06.407  {
00:11:06.407  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:06.407  "dma_device_type": 2
00:11:06.407  }
00:11:06.407  ],
00:11:06.407  "driver_specific": {}
00:11:06.407  }
00:11:06.407  ]
00:11:06.407   11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:06.407   11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:11:06.407   11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4
00:11:06.407   11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:06.407   11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:06.407   11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:06.407   11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:06.407   11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:06.407   11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:06.407   11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:06.407   11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:06.407   11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:06.407    11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:06.407    11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:06.407    11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:06.407    11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.407    11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:06.407   11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:06.407    "name": "Existed_Raid",
00:11:06.407    "uuid": "76a52fe5-7677-4be4-b8fa-1efaf794b319",
00:11:06.407    "strip_size_kb": 64,
00:11:06.407    "state": "online",
00:11:06.407    "raid_level": "raid0",
00:11:06.407    "superblock": false,
00:11:06.407    "num_base_bdevs": 4,
00:11:06.407    "num_base_bdevs_discovered": 4,
00:11:06.407    "num_base_bdevs_operational": 4,
00:11:06.407    "base_bdevs_list": [
00:11:06.407      {
00:11:06.407        "name": "NewBaseBdev",
00:11:06.407        "uuid": "18f89dfa-d388-4d5c-b3e2-5d56b45802fd",
00:11:06.407        "is_configured": true,
00:11:06.407        "data_offset": 0,
00:11:06.407        "data_size": 65536
00:11:06.407      },
00:11:06.407      {
00:11:06.407        "name": "BaseBdev2",
00:11:06.407        "uuid": "a3dceb54-860b-4a1b-89fa-927f7c1f7ec4",
00:11:06.407        "is_configured": true,
00:11:06.407        "data_offset": 0,
00:11:06.407        "data_size": 65536
00:11:06.407      },
00:11:06.407      {
00:11:06.407        "name": "BaseBdev3",
00:11:06.407        "uuid": "7069cb06-1e61-4cd2-bc88-69991edd79e1",
00:11:06.407        "is_configured": true,
00:11:06.407        "data_offset": 0,
00:11:06.407        "data_size": 65536
00:11:06.407      },
00:11:06.407      {
00:11:06.407        "name": "BaseBdev4",
00:11:06.407        "uuid": "27053c95-58ef-48da-86ee-9fd9d3ffe62a",
00:11:06.407        "is_configured": true,
00:11:06.407        "data_offset": 0,
00:11:06.407        "data_size": 65536
00:11:06.407      }
00:11:06.407    ]
00:11:06.407  }'
00:11:06.407   11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:06.407   11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
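The last hole is filled by creating NewBaseBdev with -u 18f89dfa-..., i.e. reusing the UUID that the deleted BaseBdev1 left behind in slot 0 (read back with jq just before), which is how the new bdev gets matched to that slot. With all four slots configured, the raid bdev finishes configuration and moves from "configuring" to "online", and the volume now reports a real UUID of its own instead of the all-zero placeholder. A sketch of the same sequence, same assumptions as earlier (uuid is just an illustrative shell variable):

  # reuse the UUID still recorded for slot 0 so the replacement bdev lands in that slot
  uuid=$(./scripts/rpc.py bdev_raid_get_bdevs all | jq -r '.[0].base_bdevs_list[0].uuid')
  ./scripts/rpc.py bdev_malloc_create 32 512 -b NewBaseBdev -u "$uuid"
  ./scripts/rpc.py bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # expect "online"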
00:11:06.976   11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:11:06.976   11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:11:06.976   11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:06.976   11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:06.976   11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:11:06.976   11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:06.976    11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:11:06.976    11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:06.976    11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.976    11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:06.976  [2024-12-16 11:32:32.749652] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:06.976    11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:06.976   11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:06.976    "name": "Existed_Raid",
00:11:06.976    "aliases": [
00:11:06.976      "76a52fe5-7677-4be4-b8fa-1efaf794b319"
00:11:06.976    ],
00:11:06.976    "product_name": "Raid Volume",
00:11:06.976    "block_size": 512,
00:11:06.976    "num_blocks": 262144,
00:11:06.976    "uuid": "76a52fe5-7677-4be4-b8fa-1efaf794b319",
00:11:06.976    "assigned_rate_limits": {
00:11:06.976      "rw_ios_per_sec": 0,
00:11:06.976      "rw_mbytes_per_sec": 0,
00:11:06.976      "r_mbytes_per_sec": 0,
00:11:06.976      "w_mbytes_per_sec": 0
00:11:06.976    },
00:11:06.976    "claimed": false,
00:11:06.976    "zoned": false,
00:11:06.976    "supported_io_types": {
00:11:06.976      "read": true,
00:11:06.976      "write": true,
00:11:06.976      "unmap": true,
00:11:06.976      "flush": true,
00:11:06.976      "reset": true,
00:11:06.976      "nvme_admin": false,
00:11:06.976      "nvme_io": false,
00:11:06.976      "nvme_io_md": false,
00:11:06.976      "write_zeroes": true,
00:11:06.976      "zcopy": false,
00:11:06.976      "get_zone_info": false,
00:11:06.976      "zone_management": false,
00:11:06.976      "zone_append": false,
00:11:06.976      "compare": false,
00:11:06.976      "compare_and_write": false,
00:11:06.976      "abort": false,
00:11:06.976      "seek_hole": false,
00:11:06.976      "seek_data": false,
00:11:06.976      "copy": false,
00:11:06.976      "nvme_iov_md": false
00:11:06.976    },
00:11:06.976    "memory_domains": [
00:11:06.976      {
00:11:06.976        "dma_device_id": "system",
00:11:06.976        "dma_device_type": 1
00:11:06.976      },
00:11:06.976      {
00:11:06.976        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:06.976        "dma_device_type": 2
00:11:06.976      },
00:11:06.977      {
00:11:06.977        "dma_device_id": "system",
00:11:06.977        "dma_device_type": 1
00:11:06.977      },
00:11:06.977      {
00:11:06.977        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:06.977        "dma_device_type": 2
00:11:06.977      },
00:11:06.977      {
00:11:06.977        "dma_device_id": "system",
00:11:06.977        "dma_device_type": 1
00:11:06.977      },
00:11:06.977      {
00:11:06.977        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:06.977        "dma_device_type": 2
00:11:06.977      },
00:11:06.977      {
00:11:06.977        "dma_device_id": "system",
00:11:06.977        "dma_device_type": 1
00:11:06.977      },
00:11:06.977      {
00:11:06.977        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:06.977        "dma_device_type": 2
00:11:06.977      }
00:11:06.977    ],
00:11:06.977    "driver_specific": {
00:11:06.977      "raid": {
00:11:06.977        "uuid": "76a52fe5-7677-4be4-b8fa-1efaf794b319",
00:11:06.977        "strip_size_kb": 64,
00:11:06.977        "state": "online",
00:11:06.977        "raid_level": "raid0",
00:11:06.977        "superblock": false,
00:11:06.977        "num_base_bdevs": 4,
00:11:06.977        "num_base_bdevs_discovered": 4,
00:11:06.977        "num_base_bdevs_operational": 4,
00:11:06.977        "base_bdevs_list": [
00:11:06.977          {
00:11:06.977            "name": "NewBaseBdev",
00:11:06.977            "uuid": "18f89dfa-d388-4d5c-b3e2-5d56b45802fd",
00:11:06.977            "is_configured": true,
00:11:06.977            "data_offset": 0,
00:11:06.977            "data_size": 65536
00:11:06.977          },
00:11:06.977          {
00:11:06.977            "name": "BaseBdev2",
00:11:06.977            "uuid": "a3dceb54-860b-4a1b-89fa-927f7c1f7ec4",
00:11:06.977            "is_configured": true,
00:11:06.977            "data_offset": 0,
00:11:06.977            "data_size": 65536
00:11:06.977          },
00:11:06.977          {
00:11:06.977            "name": "BaseBdev3",
00:11:06.977            "uuid": "7069cb06-1e61-4cd2-bc88-69991edd79e1",
00:11:06.977            "is_configured": true,
00:11:06.977            "data_offset": 0,
00:11:06.977            "data_size": 65536
00:11:06.977          },
00:11:06.977          {
00:11:06.977            "name": "BaseBdev4",
00:11:06.977            "uuid": "27053c95-58ef-48da-86ee-9fd9d3ffe62a",
00:11:06.977            "is_configured": true,
00:11:06.977            "data_offset": 0,
00:11:06.977            "data_size": 65536
00:11:06.977          }
00:11:06.977        ]
00:11:06.977      }
00:11:06.977    }
00:11:06.977  }'
00:11:06.977    11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:06.977   11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:11:06.977  BaseBdev2
00:11:06.977  BaseBdev3
00:11:06.977  BaseBdev4'
00:11:06.977    11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:06.977   11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:11:06.977   11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:06.977    11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:11:06.977    11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:06.977    11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:06.977    11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.977    11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:06.977   11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:06.977   11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:06.977   11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:06.977    11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:11:06.977    11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:06.977    11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:06.977    11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.977    11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:06.977   11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:06.977   11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:06.977   11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:06.977    11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:11:06.977    11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:06.977    11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.977    11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:06.977    11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:06.977   11:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:06.977   11:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:06.977   11:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:06.977    11:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:06.977    11:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:11:06.977    11:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:06.977    11:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.236    11:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:07.236   11:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:07.236   11:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
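verify_raid_bdev_properties then checks that the assembled volume advertises the same format as each of its members: for the raid bdev and for every configured base bdev it projects block_size, md_size, md_interleave and dif_type into one string with jq ("512   " here, the metadata fields being empty for malloc bdevs) and compares the strings. A minimal sketch of one such comparison, using the same jq filter as the test (fmt, raid_fmt and base_fmt are illustrative variable names):

  fmt='[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
  raid_fmt=$(./scripts/rpc.py bdev_get_bdevs -b Existed_Raid | jq -r ".[] | $fmt")
  base_fmt=$(./scripts/rpc.py bdev_get_bdevs -b BaseBdev2 | jq -r ".[] | $fmt")
  [[ "$raid_fmt" == "$base_fmt" ]] && echo 'format matches'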
00:11:07.236   11:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:11:07.236   11:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:07.236   11:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.236  [2024-12-16 11:32:33.072756] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:07.236  [2024-12-16 11:32:33.072845] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:07.236  [2024-12-16 11:32:33.072958] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:07.236  [2024-12-16 11:32:33.073031] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:07.236  [2024-12-16 11:32:33.073042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline
00:11:07.236   11:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
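  [annotation] Deleting the raid bdev drives it from online to offline and then frees all base bdevs in destruct, as the DEBUG lines above show. The RPC behind that teardown, assuming scripts/rpc.py with the default socket:

      ./scripts/rpc.py bdev_raid_delete Existed_Raid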
00:11:07.236   11:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80670
00:11:07.236   11:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 80670 ']'
00:11:07.236   11:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 80670
00:11:07.236    11:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname
00:11:07.236   11:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:07.236    11:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80670
00:11:07.236   11:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:11:07.236   11:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:11:07.236   11:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80670'
00:11:07.236  killing process with pid 80670
00:11:07.236   11:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 80670
00:11:07.236  [2024-12-16 11:32:33.114073] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:11:07.236   11:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 80670
00:11:07.236  [2024-12-16 11:32:33.156728] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:11:07.495   11:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
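  [annotation] killprocess is a harness helper from autotest_common.sh; the trace above shows it confirming pid 80670 still belongs to the SPDK reactor before signalling it. A rough, simplified equivalent of what it does here (not the helper's actual body):

      # make sure pid 80670 is still the SPDK app (reactor_0), then stop it and reap it
      ps --no-headers -o comm= 80670      # expected: reactor_0
      kill 80670
      wait 80670                          # works here because the app is a child of the test shell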
00:11:07.495  
00:11:07.495  real	0m9.833s
00:11:07.495  user	0m16.711s
00:11:07.495  sys	0m2.165s
00:11:07.495   11:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:07.495   11:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.495  ************************************
00:11:07.495  END TEST raid_state_function_test
00:11:07.495  ************************************
00:11:07.495   11:32:33 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true
00:11:07.495   11:32:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:11:07.495   11:32:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:07.495   11:32:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:11:07.495  ************************************
00:11:07.495  START TEST raid_state_function_test_sb
00:11:07.495  ************************************
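  [annotation] run_test is the harness wrapper that prints the START/END banners and the real/user/sys timing seen above around each test body. The invocation recorded here, with the argument mapping the test function makes internally:

      # wrapper: START banner, `time` around the test body, END banner
      run_test raid_state_function_test_sb raid_state_function_test raid0 4 true
      # inside raid_state_function_test the positional arguments become:
      #   raid_level=raid0   num_base_bdevs=4   superblock=true  (so -s is passed to bdev_raid_create)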
00:11:07.495   11:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true
00:11:07.495   11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:11:07.495   11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:11:07.495   11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:11:07.495   11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:11:07.495    11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:11:07.495    11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:07.495    11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:11:07.495    11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:07.495    11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:07.495    11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:11:07.495    11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:07.495    11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:07.495    11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:11:07.495    11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:07.495    11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:07.495    11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:11:07.495    11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:07.495    11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:07.495   11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:11:07.495   11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:11:07.495   11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:11:07.495   11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:11:07.495   11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:11:07.495   11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:11:07.495   11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:11:07.495   11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:11:07.495   11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:11:07.495   11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:11:07.495   11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:11:07.495   11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81325
00:11:07.495   11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:11:07.495  Process raid pid: 81325
00:11:07.496   11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81325'
00:11:07.496   11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81325
00:11:07.496   11:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 81325 ']'
00:11:07.496   11:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:07.496   11:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:11:07.496   11:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:07.496  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:07.496   11:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:11:07.496   11:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:07.755  [2024-12-16 11:32:33.575615] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:11:07.755  [2024-12-16 11:32:33.575852] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:07.755  [2024-12-16 11:32:33.736447] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:07.755  [2024-12-16 11:32:33.785276] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:11:08.015  [2024-12-16 11:32:33.827328] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:08.015  [2024-12-16 11:32:33.827370] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:08.583   11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:11:08.583   11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0
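  [annotation] Each run starts a fresh bdev_svc application with bdev_raid debug logging enabled and waits for its RPC socket before issuing commands. A minimal sketch of that startup, assuming it is run from the SPDK repo root (waitforlisten is a test-harness helper, not a standalone tool):

      # start the bare bdev service app on core 0 with raid debug traces
      ./test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid &
      raid_pid=$!
      # block until the app is up and listening on /var/tmp/spdk.sock
      waitforlisten "$raid_pid"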
00:11:08.583   11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:08.583   11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:08.583   11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:08.583  [2024-12-16 11:32:34.468836] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:08.583  [2024-12-16 11:32:34.468893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:08.583  [2024-12-16 11:32:34.468905] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:08.583  [2024-12-16 11:32:34.468915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:08.583  [2024-12-16 11:32:34.468921] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:08.583  [2024-12-16 11:32:34.468932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:08.583  [2024-12-16 11:32:34.468938] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:11:08.583  [2024-12-16 11:32:34.468947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:11:08.583   11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:08.583   11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:11:08.583   11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:08.583   11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:08.583   11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:08.583   11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:08.583   11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:08.583   11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:08.583   11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:08.583   11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:08.583   11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:08.583    11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:08.583    11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:08.583    11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:08.584    11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:08.584    11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:08.584   11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:08.584    "name": "Existed_Raid",
00:11:08.584    "uuid": "5d0d8dc1-00c7-4d06-9d62-cb699c3c77a7",
00:11:08.584    "strip_size_kb": 64,
00:11:08.584    "state": "configuring",
00:11:08.584    "raid_level": "raid0",
00:11:08.584    "superblock": true,
00:11:08.584    "num_base_bdevs": 4,
00:11:08.584    "num_base_bdevs_discovered": 0,
00:11:08.584    "num_base_bdevs_operational": 4,
00:11:08.584    "base_bdevs_list": [
00:11:08.584      {
00:11:08.584        "name": "BaseBdev1",
00:11:08.584        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:08.584        "is_configured": false,
00:11:08.584        "data_offset": 0,
00:11:08.584        "data_size": 0
00:11:08.584      },
00:11:08.584      {
00:11:08.584        "name": "BaseBdev2",
00:11:08.584        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:08.584        "is_configured": false,
00:11:08.584        "data_offset": 0,
00:11:08.584        "data_size": 0
00:11:08.584      },
00:11:08.584      {
00:11:08.584        "name": "BaseBdev3",
00:11:08.584        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:08.584        "is_configured": false,
00:11:08.584        "data_offset": 0,
00:11:08.584        "data_size": 0
00:11:08.584      },
00:11:08.584      {
00:11:08.584        "name": "BaseBdev4",
00:11:08.584        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:08.584        "is_configured": false,
00:11:08.584        "data_offset": 0,
00:11:08.584        "data_size": 0
00:11:08.584      }
00:11:08.584    ]
00:11:08.584  }'
00:11:08.584   11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:08.584   11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
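  [annotation] The raid bdev was created with -s before any of its base bdevs exist, so it sits in the configuring state with zero discovered members, as the JSON above records. A sketch of how that state can be read back, assuming scripts/rpc.py and jq; the expected output is taken from the dump above:

      ./scripts/rpc.py bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")
                 | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'
      # expected at this point: configuring 0/4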
00:11:08.843   11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:11:08.843   11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:08.843   11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:08.843  [2024-12-16 11:32:34.880020] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:08.843  [2024-12-16 11:32:34.880128] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:11:08.843   11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:08.843   11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:08.843   11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:08.843   11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:08.843  [2024-12-16 11:32:34.892057] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:08.843  [2024-12-16 11:32:34.892104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:08.843  [2024-12-16 11:32:34.892113] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:08.843  [2024-12-16 11:32:34.892122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:08.843  [2024-12-16 11:32:34.892129] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:08.843  [2024-12-16 11:32:34.892138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:08.843  [2024-12-16 11:32:34.892144] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:11:08.843  [2024-12-16 11:32:34.892152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:11:08.844   11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:08.844   11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:11:08.844   11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:08.844   11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:09.104  [2024-12-16 11:32:34.913262] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:09.104  BaseBdev1
00:11:09.104   11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:09.104   11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:11:09.104   11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:11:09.104   11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:09.104   11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:11:09.104   11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:09.104   11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:09.104   11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:09.104   11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:09.104   11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:09.104   11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:09.104   11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:11:09.104   11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:09.104   11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:09.104  [
00:11:09.104  {
00:11:09.104  "name": "BaseBdev1",
00:11:09.104  "aliases": [
00:11:09.104  "30dbf86a-78f8-4f15-976d-514f5a49bc91"
00:11:09.104  ],
00:11:09.104  "product_name": "Malloc disk",
00:11:09.104  "block_size": 512,
00:11:09.104  "num_blocks": 65536,
00:11:09.104  "uuid": "30dbf86a-78f8-4f15-976d-514f5a49bc91",
00:11:09.104  "assigned_rate_limits": {
00:11:09.104  "rw_ios_per_sec": 0,
00:11:09.104  "rw_mbytes_per_sec": 0,
00:11:09.104  "r_mbytes_per_sec": 0,
00:11:09.104  "w_mbytes_per_sec": 0
00:11:09.104  },
00:11:09.104  "claimed": true,
00:11:09.104  "claim_type": "exclusive_write",
00:11:09.104  "zoned": false,
00:11:09.104  "supported_io_types": {
00:11:09.104  "read": true,
00:11:09.104  "write": true,
00:11:09.104  "unmap": true,
00:11:09.104  "flush": true,
00:11:09.104  "reset": true,
00:11:09.104  "nvme_admin": false,
00:11:09.104  "nvme_io": false,
00:11:09.104  "nvme_io_md": false,
00:11:09.104  "write_zeroes": true,
00:11:09.104  "zcopy": true,
00:11:09.104  "get_zone_info": false,
00:11:09.104  "zone_management": false,
00:11:09.104  "zone_append": false,
00:11:09.104  "compare": false,
00:11:09.104  "compare_and_write": false,
00:11:09.104  "abort": true,
00:11:09.104  "seek_hole": false,
00:11:09.104  "seek_data": false,
00:11:09.104  "copy": true,
00:11:09.104  "nvme_iov_md": false
00:11:09.104  },
00:11:09.104  "memory_domains": [
00:11:09.104  {
00:11:09.104  "dma_device_id": "system",
00:11:09.104  "dma_device_type": 1
00:11:09.104  },
00:11:09.104  {
00:11:09.104  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:09.104  "dma_device_type": 2
00:11:09.104  }
00:11:09.104  ],
00:11:09.104  "driver_specific": {}
00:11:09.104  }
00:11:09.104  ]
00:11:09.104   11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:09.104   11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
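  [annotation] Each BaseBdevN is a 32 MiB malloc bdev with 512-byte blocks (65536 blocks, matching the JSON above), and waitforbdev polls until it is visible. The equivalent manual steps, assuming scripts/rpc.py with the default socket (waitforbdev itself is a harness helper):

      # create the backing malloc bdev: 32 MiB total, 512-byte block size
      ./scripts/rpc.py bdev_malloc_create 32 512 -b BaseBdev1
      # let pending examine callbacks finish, then confirm the bdev answers within 2s
      ./scripts/rpc.py bdev_wait_for_examine
      ./scripts/rpc.py bdev_get_bdevs -b BaseBdev1 -t 2000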
00:11:09.104   11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:11:09.104   11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:09.104   11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:09.104   11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:09.104   11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:09.104   11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:09.104   11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:09.104   11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:09.104   11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:09.104   11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:09.104    11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:09.104    11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:09.104    11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:09.104    11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:09.104    11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:09.104   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:09.104    "name": "Existed_Raid",
00:11:09.104    "uuid": "1b78221a-d9db-4b8f-baa9-c296549c8b91",
00:11:09.104    "strip_size_kb": 64,
00:11:09.104    "state": "configuring",
00:11:09.104    "raid_level": "raid0",
00:11:09.104    "superblock": true,
00:11:09.104    "num_base_bdevs": 4,
00:11:09.104    "num_base_bdevs_discovered": 1,
00:11:09.104    "num_base_bdevs_operational": 4,
00:11:09.104    "base_bdevs_list": [
00:11:09.104      {
00:11:09.104        "name": "BaseBdev1",
00:11:09.104        "uuid": "30dbf86a-78f8-4f15-976d-514f5a49bc91",
00:11:09.104        "is_configured": true,
00:11:09.104        "data_offset": 2048,
00:11:09.104        "data_size": 63488
00:11:09.104      },
00:11:09.104      {
00:11:09.104        "name": "BaseBdev2",
00:11:09.104        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:09.104        "is_configured": false,
00:11:09.104        "data_offset": 0,
00:11:09.104        "data_size": 0
00:11:09.104      },
00:11:09.104      {
00:11:09.104        "name": "BaseBdev3",
00:11:09.104        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:09.104        "is_configured": false,
00:11:09.104        "data_offset": 0,
00:11:09.104        "data_size": 0
00:11:09.104      },
00:11:09.104      {
00:11:09.104        "name": "BaseBdev4",
00:11:09.104        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:09.104        "is_configured": false,
00:11:09.104        "data_offset": 0,
00:11:09.104        "data_size": 0
00:11:09.104      }
00:11:09.104    ]
00:11:09.104  }'
00:11:09.104   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:09.104   11:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:09.363   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:11:09.363   11:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:09.363   11:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:09.363  [2024-12-16 11:32:35.424502] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:09.363  [2024-12-16 11:32:35.424590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:11:09.621   11:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:09.622   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:09.622   11:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:09.622   11:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:09.622  [2024-12-16 11:32:35.436576] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:09.622  [2024-12-16 11:32:35.438506] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:09.622  [2024-12-16 11:32:35.438559] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:09.622  [2024-12-16 11:32:35.438570] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:09.622  [2024-12-16 11:32:35.438596] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:09.622  [2024-12-16 11:32:35.438603] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:11:09.622  [2024-12-16 11:32:35.438612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:11:09.622   11:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:09.622   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:11:09.622   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:09.622   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:11:09.622   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:09.622   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:09.622   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:09.622   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:09.622   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:09.622   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:09.622   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:09.622   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:09.622   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:09.622    11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:09.622    11:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:09.622    11:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:09.622    11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:09.622    11:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:09.622   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:09.622    "name": "Existed_Raid",
00:11:09.622    "uuid": "eb3e66e5-daa1-43b0-803a-13238e582337",
00:11:09.622    "strip_size_kb": 64,
00:11:09.622    "state": "configuring",
00:11:09.622    "raid_level": "raid0",
00:11:09.622    "superblock": true,
00:11:09.622    "num_base_bdevs": 4,
00:11:09.622    "num_base_bdevs_discovered": 1,
00:11:09.622    "num_base_bdevs_operational": 4,
00:11:09.622    "base_bdevs_list": [
00:11:09.622      {
00:11:09.622        "name": "BaseBdev1",
00:11:09.622        "uuid": "30dbf86a-78f8-4f15-976d-514f5a49bc91",
00:11:09.622        "is_configured": true,
00:11:09.622        "data_offset": 2048,
00:11:09.622        "data_size": 63488
00:11:09.622      },
00:11:09.622      {
00:11:09.622        "name": "BaseBdev2",
00:11:09.622        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:09.622        "is_configured": false,
00:11:09.622        "data_offset": 0,
00:11:09.622        "data_size": 0
00:11:09.622      },
00:11:09.622      {
00:11:09.622        "name": "BaseBdev3",
00:11:09.622        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:09.622        "is_configured": false,
00:11:09.622        "data_offset": 0,
00:11:09.622        "data_size": 0
00:11:09.622      },
00:11:09.622      {
00:11:09.622        "name": "BaseBdev4",
00:11:09.622        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:09.622        "is_configured": false,
00:11:09.622        "data_offset": 0,
00:11:09.622        "data_size": 0
00:11:09.622      }
00:11:09.622    ]
00:11:09.622  }'
00:11:09.622   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:09.622   11:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:09.881   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:11:09.881   11:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:09.881   11:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:09.881  [2024-12-16 11:32:35.901188] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:09.881  BaseBdev2
00:11:09.881   11:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:09.881   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:11:09.881   11:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:11:09.881   11:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:09.881   11:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:11:09.881   11:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:09.881   11:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:09.881   11:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:09.881   11:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:09.881   11:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:09.881   11:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:09.881   11:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:11:09.881   11:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:09.881   11:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:09.881  [
00:11:09.881  {
00:11:09.881  "name": "BaseBdev2",
00:11:09.881  "aliases": [
00:11:09.881  "965322b0-1ac6-4739-ad78-0fa5c9a1f473"
00:11:09.881  ],
00:11:09.881  "product_name": "Malloc disk",
00:11:09.881  "block_size": 512,
00:11:09.881  "num_blocks": 65536,
00:11:09.881  "uuid": "965322b0-1ac6-4739-ad78-0fa5c9a1f473",
00:11:09.881  "assigned_rate_limits": {
00:11:09.881  "rw_ios_per_sec": 0,
00:11:09.881  "rw_mbytes_per_sec": 0,
00:11:09.881  "r_mbytes_per_sec": 0,
00:11:09.881  "w_mbytes_per_sec": 0
00:11:09.881  },
00:11:09.881  "claimed": true,
00:11:09.881  "claim_type": "exclusive_write",
00:11:09.881  "zoned": false,
00:11:09.881  "supported_io_types": {
00:11:09.881  "read": true,
00:11:09.881  "write": true,
00:11:09.881  "unmap": true,
00:11:09.881  "flush": true,
00:11:09.881  "reset": true,
00:11:09.881  "nvme_admin": false,
00:11:09.881  "nvme_io": false,
00:11:09.881  "nvme_io_md": false,
00:11:09.881  "write_zeroes": true,
00:11:09.881  "zcopy": true,
00:11:09.881  "get_zone_info": false,
00:11:09.881  "zone_management": false,
00:11:09.881  "zone_append": false,
00:11:09.881  "compare": false,
00:11:09.881  "compare_and_write": false,
00:11:09.881  "abort": true,
00:11:09.881  "seek_hole": false,
00:11:09.881  "seek_data": false,
00:11:09.881  "copy": true,
00:11:09.881  "nvme_iov_md": false
00:11:09.881  },
00:11:09.881  "memory_domains": [
00:11:09.881  {
00:11:09.881  "dma_device_id": "system",
00:11:09.881  "dma_device_type": 1
00:11:09.881  },
00:11:09.881  {
00:11:09.881  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:09.881  "dma_device_type": 2
00:11:09.881  }
00:11:09.881  ],
00:11:09.881  "driver_specific": {}
00:11:09.881  }
00:11:09.881  ]
00:11:09.881   11:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:09.881   11:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:11:09.881   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:09.881   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:09.881   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:11:09.881   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:09.881   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:09.881   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:09.881   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:09.881   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:09.881   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:09.881   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:09.881   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:09.881   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:09.881    11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:09.881    11:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:09.882    11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:09.882    11:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:10.141    11:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:10.141   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:10.141    "name": "Existed_Raid",
00:11:10.142    "uuid": "eb3e66e5-daa1-43b0-803a-13238e582337",
00:11:10.142    "strip_size_kb": 64,
00:11:10.142    "state": "configuring",
00:11:10.142    "raid_level": "raid0",
00:11:10.142    "superblock": true,
00:11:10.142    "num_base_bdevs": 4,
00:11:10.142    "num_base_bdevs_discovered": 2,
00:11:10.142    "num_base_bdevs_operational": 4,
00:11:10.142    "base_bdevs_list": [
00:11:10.142      {
00:11:10.142        "name": "BaseBdev1",
00:11:10.142        "uuid": "30dbf86a-78f8-4f15-976d-514f5a49bc91",
00:11:10.142        "is_configured": true,
00:11:10.142        "data_offset": 2048,
00:11:10.142        "data_size": 63488
00:11:10.142      },
00:11:10.142      {
00:11:10.142        "name": "BaseBdev2",
00:11:10.142        "uuid": "965322b0-1ac6-4739-ad78-0fa5c9a1f473",
00:11:10.142        "is_configured": true,
00:11:10.142        "data_offset": 2048,
00:11:10.142        "data_size": 63488
00:11:10.142      },
00:11:10.142      {
00:11:10.142        "name": "BaseBdev3",
00:11:10.142        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:10.142        "is_configured": false,
00:11:10.142        "data_offset": 0,
00:11:10.142        "data_size": 0
00:11:10.142      },
00:11:10.142      {
00:11:10.142        "name": "BaseBdev4",
00:11:10.142        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:10.142        "is_configured": false,
00:11:10.142        "data_offset": 0,
00:11:10.142        "data_size": 0
00:11:10.142      }
00:11:10.142    ]
00:11:10.142  }'
00:11:10.142   11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:10.142   11:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:10.403   11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:11:10.403   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:10.403   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:10.403  [2024-12-16 11:32:36.371633] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:10.403  BaseBdev3
00:11:10.403   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:10.403   11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:11:10.403   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:11:10.403   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:10.403   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:11:10.403   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:10.403   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:10.403   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:10.403   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:10.404   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:10.404   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:10.404   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:11:10.404   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:10.404   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:10.404  [
00:11:10.404  {
00:11:10.404  "name": "BaseBdev3",
00:11:10.404  "aliases": [
00:11:10.404  "1ac40f0b-971e-4557-bb59-f839184e7d9e"
00:11:10.404  ],
00:11:10.404  "product_name": "Malloc disk",
00:11:10.404  "block_size": 512,
00:11:10.404  "num_blocks": 65536,
00:11:10.404  "uuid": "1ac40f0b-971e-4557-bb59-f839184e7d9e",
00:11:10.404  "assigned_rate_limits": {
00:11:10.404  "rw_ios_per_sec": 0,
00:11:10.404  "rw_mbytes_per_sec": 0,
00:11:10.404  "r_mbytes_per_sec": 0,
00:11:10.404  "w_mbytes_per_sec": 0
00:11:10.404  },
00:11:10.404  "claimed": true,
00:11:10.404  "claim_type": "exclusive_write",
00:11:10.404  "zoned": false,
00:11:10.404  "supported_io_types": {
00:11:10.404  "read": true,
00:11:10.404  "write": true,
00:11:10.404  "unmap": true,
00:11:10.404  "flush": true,
00:11:10.404  "reset": true,
00:11:10.404  "nvme_admin": false,
00:11:10.404  "nvme_io": false,
00:11:10.404  "nvme_io_md": false,
00:11:10.404  "write_zeroes": true,
00:11:10.404  "zcopy": true,
00:11:10.404  "get_zone_info": false,
00:11:10.404  "zone_management": false,
00:11:10.404  "zone_append": false,
00:11:10.404  "compare": false,
00:11:10.404  "compare_and_write": false,
00:11:10.404  "abort": true,
00:11:10.404  "seek_hole": false,
00:11:10.404  "seek_data": false,
00:11:10.404  "copy": true,
00:11:10.404  "nvme_iov_md": false
00:11:10.404  },
00:11:10.404  "memory_domains": [
00:11:10.404  {
00:11:10.404  "dma_device_id": "system",
00:11:10.404  "dma_device_type": 1
00:11:10.404  },
00:11:10.404  {
00:11:10.404  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:10.404  "dma_device_type": 2
00:11:10.404  }
00:11:10.404  ],
00:11:10.404  "driver_specific": {}
00:11:10.404  }
00:11:10.404  ]
00:11:10.404   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:10.404   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:11:10.404   11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:10.404   11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:10.404   11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:11:10.404   11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:10.404   11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:10.404   11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:10.404   11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:10.404   11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:10.404   11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:10.404   11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:10.404   11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:10.404   11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:10.404    11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:10.404    11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:10.404    11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:10.404    11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:10.404    11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:10.405   11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:10.405    "name": "Existed_Raid",
00:11:10.405    "uuid": "eb3e66e5-daa1-43b0-803a-13238e582337",
00:11:10.405    "strip_size_kb": 64,
00:11:10.405    "state": "configuring",
00:11:10.405    "raid_level": "raid0",
00:11:10.405    "superblock": true,
00:11:10.405    "num_base_bdevs": 4,
00:11:10.405    "num_base_bdevs_discovered": 3,
00:11:10.405    "num_base_bdevs_operational": 4,
00:11:10.405    "base_bdevs_list": [
00:11:10.405      {
00:11:10.405        "name": "BaseBdev1",
00:11:10.405        "uuid": "30dbf86a-78f8-4f15-976d-514f5a49bc91",
00:11:10.405        "is_configured": true,
00:11:10.405        "data_offset": 2048,
00:11:10.405        "data_size": 63488
00:11:10.405      },
00:11:10.405      {
00:11:10.405        "name": "BaseBdev2",
00:11:10.405        "uuid": "965322b0-1ac6-4739-ad78-0fa5c9a1f473",
00:11:10.405        "is_configured": true,
00:11:10.405        "data_offset": 2048,
00:11:10.405        "data_size": 63488
00:11:10.405      },
00:11:10.405      {
00:11:10.405        "name": "BaseBdev3",
00:11:10.405        "uuid": "1ac40f0b-971e-4557-bb59-f839184e7d9e",
00:11:10.405        "is_configured": true,
00:11:10.405        "data_offset": 2048,
00:11:10.405        "data_size": 63488
00:11:10.405      },
00:11:10.405      {
00:11:10.405        "name": "BaseBdev4",
00:11:10.405        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:10.405        "is_configured": false,
00:11:10.405        "data_offset": 0,
00:11:10.405        "data_size": 0
00:11:10.405      }
00:11:10.405    ]
00:11:10.405  }'
00:11:10.664   11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:10.664   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:10.923   11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:11:10.923   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:10.923   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:10.923  BaseBdev4
00:11:10.923  [2024-12-16 11:32:36.901891] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:11:10.923  [2024-12-16 11:32:36.902104] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:11:10.923  [2024-12-16 11:32:36.902128] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:11:10.923  [2024-12-16 11:32:36.902428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:11:10.923  [2024-12-16 11:32:36.902562] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:11:10.923  [2024-12-16 11:32:36.902576] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:11:10.923  [2024-12-16 11:32:36.902712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:10.923   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
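  [annotation] Claiming the fourth base bdev completes the set, so the raid bdev is configured and transitions to online (the configure_cont DEBUG lines above). A quick check of that transition, assuming scripts/rpc.py and jq:

      ./scripts/rpc.py bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state'
      # expected once all four base bdevs are claimed: online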
00:11:10.923   11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:11:10.923   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:11:10.923   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:10.923   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:11:10.923   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:10.923   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:10.923   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:10.923   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:10.923   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:10.923   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:10.923   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:11:10.923   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:10.923   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:10.923  [
00:11:10.923  {
00:11:10.923  "name": "BaseBdev4",
00:11:10.923  "aliases": [
00:11:10.923  "97206d88-0d82-41da-83df-17f899d407bc"
00:11:10.923  ],
00:11:10.923  "product_name": "Malloc disk",
00:11:10.923  "block_size": 512,
00:11:10.923  "num_blocks": 65536,
00:11:10.923  "uuid": "97206d88-0d82-41da-83df-17f899d407bc",
00:11:10.923  "assigned_rate_limits": {
00:11:10.923  "rw_ios_per_sec": 0,
00:11:10.923  "rw_mbytes_per_sec": 0,
00:11:10.923  "r_mbytes_per_sec": 0,
00:11:10.923  "w_mbytes_per_sec": 0
00:11:10.923  },
00:11:10.923  "claimed": true,
00:11:10.923  "claim_type": "exclusive_write",
00:11:10.923  "zoned": false,
00:11:10.923  "supported_io_types": {
00:11:10.923  "read": true,
00:11:10.923  "write": true,
00:11:10.923  "unmap": true,
00:11:10.923  "flush": true,
00:11:10.924  "reset": true,
00:11:10.924  "nvme_admin": false,
00:11:10.924  "nvme_io": false,
00:11:10.924  "nvme_io_md": false,
00:11:10.924  "write_zeroes": true,
00:11:10.924  "zcopy": true,
00:11:10.924  "get_zone_info": false,
00:11:10.924  "zone_management": false,
00:11:10.924  "zone_append": false,
00:11:10.924  "compare": false,
00:11:10.924  "compare_and_write": false,
00:11:10.924  "abort": true,
00:11:10.924  "seek_hole": false,
00:11:10.924  "seek_data": false,
00:11:10.924  "copy": true,
00:11:10.924  "nvme_iov_md": false
00:11:10.924  },
00:11:10.924  "memory_domains": [
00:11:10.924  {
00:11:10.924  "dma_device_id": "system",
00:11:10.924  "dma_device_type": 1
00:11:10.924  },
00:11:10.924  {
00:11:10.924  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:10.924  "dma_device_type": 2
00:11:10.924  }
00:11:10.924  ],
00:11:10.924  "driver_specific": {}
00:11:10.924  }
00:11:10.924  ]
00:11:10.924   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:10.924   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:11:10.924   11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:10.924   11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:10.924   11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4
00:11:10.924   11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:10.924   11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:10.924   11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:10.924   11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:10.924   11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:10.924   11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:10.924   11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:10.924   11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:10.924   11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:10.924    11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:10.924    11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:10.924    11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:10.924    11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:10.924    11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:11.184   11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:11.184    "name": "Existed_Raid",
00:11:11.184    "uuid": "eb3e66e5-daa1-43b0-803a-13238e582337",
00:11:11.184    "strip_size_kb": 64,
00:11:11.184    "state": "online",
00:11:11.184    "raid_level": "raid0",
00:11:11.184    "superblock": true,
00:11:11.184    "num_base_bdevs": 4,
00:11:11.184    "num_base_bdevs_discovered": 4,
00:11:11.184    "num_base_bdevs_operational": 4,
00:11:11.184    "base_bdevs_list": [
00:11:11.184      {
00:11:11.184        "name": "BaseBdev1",
00:11:11.184        "uuid": "30dbf86a-78f8-4f15-976d-514f5a49bc91",
00:11:11.184        "is_configured": true,
00:11:11.184        "data_offset": 2048,
00:11:11.184        "data_size": 63488
00:11:11.184      },
00:11:11.184      {
00:11:11.184        "name": "BaseBdev2",
00:11:11.184        "uuid": "965322b0-1ac6-4739-ad78-0fa5c9a1f473",
00:11:11.184        "is_configured": true,
00:11:11.184        "data_offset": 2048,
00:11:11.184        "data_size": 63488
00:11:11.184      },
00:11:11.184      {
00:11:11.184        "name": "BaseBdev3",
00:11:11.184        "uuid": "1ac40f0b-971e-4557-bb59-f839184e7d9e",
00:11:11.184        "is_configured": true,
00:11:11.184        "data_offset": 2048,
00:11:11.184        "data_size": 63488
00:11:11.184      },
00:11:11.184      {
00:11:11.184        "name": "BaseBdev4",
00:11:11.184        "uuid": "97206d88-0d82-41da-83df-17f899d407bc",
00:11:11.184        "is_configured": true,
00:11:11.184        "data_offset": 2048,
00:11:11.184        "data_size": 63488
00:11:11.184      }
00:11:11.184    ]
00:11:11.184  }'
00:11:11.184   11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:11.184   11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:11.444   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:11:11.444   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:11:11.444   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:11.444   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:11.444   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:11:11.444   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:11.444    11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:11:11.444    11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:11.444    11:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:11.444    11:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:11.444  [2024-12-16 11:32:37.445391] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:11.444    11:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:11.444   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:11.444    "name": "Existed_Raid",
00:11:11.444    "aliases": [
00:11:11.444      "eb3e66e5-daa1-43b0-803a-13238e582337"
00:11:11.444    ],
00:11:11.444    "product_name": "Raid Volume",
00:11:11.444    "block_size": 512,
00:11:11.444    "num_blocks": 253952,
00:11:11.444    "uuid": "eb3e66e5-daa1-43b0-803a-13238e582337",
00:11:11.444    "assigned_rate_limits": {
00:11:11.444      "rw_ios_per_sec": 0,
00:11:11.444      "rw_mbytes_per_sec": 0,
00:11:11.444      "r_mbytes_per_sec": 0,
00:11:11.444      "w_mbytes_per_sec": 0
00:11:11.444    },
00:11:11.444    "claimed": false,
00:11:11.444    "zoned": false,
00:11:11.444    "supported_io_types": {
00:11:11.444      "read": true,
00:11:11.444      "write": true,
00:11:11.444      "unmap": true,
00:11:11.444      "flush": true,
00:11:11.444      "reset": true,
00:11:11.444      "nvme_admin": false,
00:11:11.444      "nvme_io": false,
00:11:11.444      "nvme_io_md": false,
00:11:11.444      "write_zeroes": true,
00:11:11.444      "zcopy": false,
00:11:11.444      "get_zone_info": false,
00:11:11.444      "zone_management": false,
00:11:11.444      "zone_append": false,
00:11:11.444      "compare": false,
00:11:11.444      "compare_and_write": false,
00:11:11.444      "abort": false,
00:11:11.444      "seek_hole": false,
00:11:11.444      "seek_data": false,
00:11:11.444      "copy": false,
00:11:11.444      "nvme_iov_md": false
00:11:11.444    },
00:11:11.444    "memory_domains": [
00:11:11.444      {
00:11:11.444        "dma_device_id": "system",
00:11:11.444        "dma_device_type": 1
00:11:11.444      },
00:11:11.444      {
00:11:11.444        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:11.444        "dma_device_type": 2
00:11:11.444      },
00:11:11.444      {
00:11:11.444        "dma_device_id": "system",
00:11:11.444        "dma_device_type": 1
00:11:11.444      },
00:11:11.444      {
00:11:11.444        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:11.444        "dma_device_type": 2
00:11:11.444      },
00:11:11.444      {
00:11:11.444        "dma_device_id": "system",
00:11:11.444        "dma_device_type": 1
00:11:11.444      },
00:11:11.444      {
00:11:11.444        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:11.444        "dma_device_type": 2
00:11:11.444      },
00:11:11.444      {
00:11:11.444        "dma_device_id": "system",
00:11:11.444        "dma_device_type": 1
00:11:11.444      },
00:11:11.444      {
00:11:11.444        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:11.444        "dma_device_type": 2
00:11:11.444      }
00:11:11.444    ],
00:11:11.444    "driver_specific": {
00:11:11.444      "raid": {
00:11:11.444        "uuid": "eb3e66e5-daa1-43b0-803a-13238e582337",
00:11:11.444        "strip_size_kb": 64,
00:11:11.444        "state": "online",
00:11:11.444        "raid_level": "raid0",
00:11:11.444        "superblock": true,
00:11:11.444        "num_base_bdevs": 4,
00:11:11.444        "num_base_bdevs_discovered": 4,
00:11:11.444        "num_base_bdevs_operational": 4,
00:11:11.444        "base_bdevs_list": [
00:11:11.444          {
00:11:11.444            "name": "BaseBdev1",
00:11:11.444            "uuid": "30dbf86a-78f8-4f15-976d-514f5a49bc91",
00:11:11.444            "is_configured": true,
00:11:11.444            "data_offset": 2048,
00:11:11.444            "data_size": 63488
00:11:11.444          },
00:11:11.444          {
00:11:11.444            "name": "BaseBdev2",
00:11:11.444            "uuid": "965322b0-1ac6-4739-ad78-0fa5c9a1f473",
00:11:11.444            "is_configured": true,
00:11:11.444            "data_offset": 2048,
00:11:11.444            "data_size": 63488
00:11:11.444          },
00:11:11.444          {
00:11:11.444            "name": "BaseBdev3",
00:11:11.444            "uuid": "1ac40f0b-971e-4557-bb59-f839184e7d9e",
00:11:11.444            "is_configured": true,
00:11:11.444            "data_offset": 2048,
00:11:11.444            "data_size": 63488
00:11:11.444          },
00:11:11.444          {
00:11:11.444            "name": "BaseBdev4",
00:11:11.444            "uuid": "97206d88-0d82-41da-83df-17f899d407bc",
00:11:11.444            "is_configured": true,
00:11:11.444            "data_offset": 2048,
00:11:11.444            "data_size": 63488
00:11:11.444          }
00:11:11.444        ]
00:11:11.444      }
00:11:11.444    }
00:11:11.444  }'
00:11:11.444    11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:11.704   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:11:11.704  BaseBdev2
00:11:11.704  BaseBdev3
00:11:11.704  BaseBdev4'
00:11:11.704    11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:11.704   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:11:11.704   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:11.704    11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:11:11.704    11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:11.704    11:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:11.704    11:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:11.704    11:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:11.704   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:11.704   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:11.704   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:11.704    11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:11:11.704    11:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:11.704    11:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:11.704    11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:11.704    11:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:11.704   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:11.704   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:11.704   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:11.704    11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:11:11.704    11:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:11.704    11:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:11.704    11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:11.704    11:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:11.704   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:11.704   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:11.704   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:11.704    11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:11:11.704    11:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:11.704    11:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:11.704    11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:11.704    11:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:11.704   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:11.704   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
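The loop above checks that block_size, md_size, md_interleave and dif_type of the raid bdev match every configured base bdev; the joined string '512   ' simply means 512-byte blocks with no metadata or DIF on either side. A rough standalone equivalent, under the same assumptions about a running target, scripts/rpc.py and jq:

    # Compare the property tuple of the raid bdev against each configured member.
    raid_props=$(./scripts/rpc.py bdev_get_bdevs -b Existed_Raid \
      | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
    for name in $(./scripts/rpc.py bdev_get_bdevs -b Existed_Raid \
      | jq -r '.[] | .driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'); do
      base_props=$(./scripts/rpc.py bdev_get_bdevs -b "$name" \
        | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
      [[ "$base_props" == "$raid_props" ]] || echo "property mismatch on $name" >&2
    done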
00:11:11.704   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:11:11.704   11:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:11.704   11:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:11.704  [2024-12-16 11:32:37.768521] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:11:11.704  [2024-12-16 11:32:37.768615] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:11.704  [2024-12-16 11:32:37.768718] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:11.964   11:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:11.964   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:11:11.964   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0
00:11:11.964   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:11:11.964   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1
00:11:11.964   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:11:11.964   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3
00:11:11.964   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:11.964   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:11:11.964   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:11.964   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:11.964   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:11.964   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:11.964   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:11.964   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:11.964   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:11.964    11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:11.964    11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:11.964    11:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:11.964    11:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:11.964    11:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:11.964   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:11.964    "name": "Existed_Raid",
00:11:11.964    "uuid": "eb3e66e5-daa1-43b0-803a-13238e582337",
00:11:11.964    "strip_size_kb": 64,
00:11:11.964    "state": "offline",
00:11:11.964    "raid_level": "raid0",
00:11:11.964    "superblock": true,
00:11:11.964    "num_base_bdevs": 4,
00:11:11.964    "num_base_bdevs_discovered": 3,
00:11:11.964    "num_base_bdevs_operational": 3,
00:11:11.964    "base_bdevs_list": [
00:11:11.964      {
00:11:11.964        "name": null,
00:11:11.964        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:11.964        "is_configured": false,
00:11:11.964        "data_offset": 0,
00:11:11.964        "data_size": 63488
00:11:11.964      },
00:11:11.964      {
00:11:11.964        "name": "BaseBdev2",
00:11:11.964        "uuid": "965322b0-1ac6-4739-ad78-0fa5c9a1f473",
00:11:11.964        "is_configured": true,
00:11:11.964        "data_offset": 2048,
00:11:11.964        "data_size": 63488
00:11:11.964      },
00:11:11.964      {
00:11:11.964        "name": "BaseBdev3",
00:11:11.964        "uuid": "1ac40f0b-971e-4557-bb59-f839184e7d9e",
00:11:11.964        "is_configured": true,
00:11:11.964        "data_offset": 2048,
00:11:11.964        "data_size": 63488
00:11:11.964      },
00:11:11.964      {
00:11:11.964        "name": "BaseBdev4",
00:11:11.964        "uuid": "97206d88-0d82-41da-83df-17f899d407bc",
00:11:11.964        "is_configured": true,
00:11:11.964        "data_offset": 2048,
00:11:11.964        "data_size": 63488
00:11:11.964      }
00:11:11.964    ]
00:11:11.964  }'
00:11:11.964   11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:11.964   11:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
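Deleting BaseBdev1 (a malloc bdev) removes one member of the raid0 array, and since raid0 carries no redundancy (has_redundancy returned 1 above) the expected state is offline, which the dump above confirms: 3 of 4 members remain and the missing slot is a null placeholder with an all-zero uuid. A condensed replay of that step, same rpc.py/jq assumptions as before:

    # Drop one member and read back the array state; raid0 cannot survive it.
    ./scripts/rpc.py bdev_malloc_delete BaseBdev1
    ./scripts/rpc.py bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid").state'    # expected: offline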
00:11:12.224   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:11:12.224   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:12.224    11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:12.224    11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:12.224    11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:12.224    11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:11:12.224    11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:12.485  [2024-12-16 11:32:38.303291] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:12.485    11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:12.485    11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:11:12.485    11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:12.485    11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:12.485    11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:12.485  [2024-12-16 11:32:38.374879] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:12.485    11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:12.485    11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:11:12.485    11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:12.485    11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:12.485    11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:12.485  [2024-12-16 11:32:38.446395] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:11:12.485  [2024-12-16 11:32:38.446522] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:12.485    11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:12.485    11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:11:12.485    11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:12.485    11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:12.485    11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
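Once the remaining members (BaseBdev2 through BaseBdev4) are deleted as well, raid_bdev_cleanup runs and the array disappears entirely, so the jq probe above yields an empty string and the '-n' test fails. A hedged one-liner version of that assertion, same assumptions as before:

    # After full cleanup no raid bdev should be reported at all.
    leftover=$(./scripts/rpc.py bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)')
    [ -z "$leftover" ] || echo "unexpected leftover raid bdev: $leftover" >&2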
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:12.485  BaseBdev2
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:12.485   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:12.745  [
00:11:12.745    {
00:11:12.745      "name": "BaseBdev2",
00:11:12.745      "aliases": [
00:11:12.745        "53dcec7e-2734-468c-b526-5d5f3091baf5"
00:11:12.745      ],
00:11:12.745      "product_name": "Malloc disk",
00:11:12.745      "block_size": 512,
00:11:12.745      "num_blocks": 65536,
00:11:12.745      "uuid": "53dcec7e-2734-468c-b526-5d5f3091baf5",
00:11:12.745      "assigned_rate_limits": {
00:11:12.745        "rw_ios_per_sec": 0,
00:11:12.745        "rw_mbytes_per_sec": 0,
00:11:12.745        "r_mbytes_per_sec": 0,
00:11:12.745        "w_mbytes_per_sec": 0
00:11:12.745      },
00:11:12.745      "claimed": false,
00:11:12.745      "zoned": false,
00:11:12.745      "supported_io_types": {
00:11:12.745        "read": true,
00:11:12.745        "write": true,
00:11:12.745        "unmap": true,
00:11:12.745        "flush": true,
00:11:12.745        "reset": true,
00:11:12.745        "nvme_admin": false,
00:11:12.745        "nvme_io": false,
00:11:12.745        "nvme_io_md": false,
00:11:12.745        "write_zeroes": true,
00:11:12.745        "zcopy": true,
00:11:12.745        "get_zone_info": false,
00:11:12.745        "zone_management": false,
00:11:12.745        "zone_append": false,
00:11:12.745        "compare": false,
00:11:12.745        "compare_and_write": false,
00:11:12.745        "abort": true,
00:11:12.745        "seek_hole": false,
00:11:12.745        "seek_data": false,
00:11:12.745        "copy": true,
00:11:12.745        "nvme_iov_md": false
00:11:12.745      },
00:11:12.745      "memory_domains": [
00:11:12.745        {
00:11:12.745          "dma_device_id": "system",
00:11:12.745          "dma_device_type": 1
00:11:12.745        },
00:11:12.745        {
00:11:12.745          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:12.745          "dma_device_type": 2
00:11:12.745        }
00:11:12.745      ],
00:11:12.745      "driver_specific": {}
00:11:12.745    }
00:11:12.745  ]
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:12.745  BaseBdev3
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:12.745  [
00:11:12.745    {
00:11:12.745      "name": "BaseBdev3",
00:11:12.745      "aliases": [
00:11:12.745        "ba73e6d6-c4b4-45f6-aa9a-7bda2315225b"
00:11:12.745      ],
00:11:12.745      "product_name": "Malloc disk",
00:11:12.745      "block_size": 512,
00:11:12.745      "num_blocks": 65536,
00:11:12.745      "uuid": "ba73e6d6-c4b4-45f6-aa9a-7bda2315225b",
00:11:12.745      "assigned_rate_limits": {
00:11:12.745        "rw_ios_per_sec": 0,
00:11:12.745        "rw_mbytes_per_sec": 0,
00:11:12.745        "r_mbytes_per_sec": 0,
00:11:12.745        "w_mbytes_per_sec": 0
00:11:12.745      },
00:11:12.745      "claimed": false,
00:11:12.745      "zoned": false,
00:11:12.745      "supported_io_types": {
00:11:12.745        "read": true,
00:11:12.745        "write": true,
00:11:12.745        "unmap": true,
00:11:12.745        "flush": true,
00:11:12.745        "reset": true,
00:11:12.745        "nvme_admin": false,
00:11:12.745        "nvme_io": false,
00:11:12.745        "nvme_io_md": false,
00:11:12.745        "write_zeroes": true,
00:11:12.745        "zcopy": true,
00:11:12.745        "get_zone_info": false,
00:11:12.745        "zone_management": false,
00:11:12.745        "zone_append": false,
00:11:12.745        "compare": false,
00:11:12.745        "compare_and_write": false,
00:11:12.745        "abort": true,
00:11:12.745        "seek_hole": false,
00:11:12.745        "seek_data": false,
00:11:12.745        "copy": true,
00:11:12.745        "nvme_iov_md": false
00:11:12.745      },
00:11:12.745      "memory_domains": [
00:11:12.745        {
00:11:12.745          "dma_device_id": "system",
00:11:12.745          "dma_device_type": 1
00:11:12.745        },
00:11:12.745        {
00:11:12.745          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:12.745          "dma_device_type": 2
00:11:12.745        }
00:11:12.745      ],
00:11:12.745      "driver_specific": {}
00:11:12.745    }
00:11:12.745  ]
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:12.745  BaseBdev4
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:12.745   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:12.745  [
00:11:12.745    {
00:11:12.745      "name": "BaseBdev4",
00:11:12.745      "aliases": [
00:11:12.745        "2892f945-266c-465f-b4a7-eede4cfd9d19"
00:11:12.745      ],
00:11:12.745      "product_name": "Malloc disk",
00:11:12.745      "block_size": 512,
00:11:12.745      "num_blocks": 65536,
00:11:12.745      "uuid": "2892f945-266c-465f-b4a7-eede4cfd9d19",
00:11:12.745      "assigned_rate_limits": {
00:11:12.745        "rw_ios_per_sec": 0,
00:11:12.745        "rw_mbytes_per_sec": 0,
00:11:12.745        "r_mbytes_per_sec": 0,
00:11:12.745        "w_mbytes_per_sec": 0
00:11:12.745      },
00:11:12.745      "claimed": false,
00:11:12.745      "zoned": false,
00:11:12.745      "supported_io_types": {
00:11:12.745        "read": true,
00:11:12.745        "write": true,
00:11:12.745        "unmap": true,
00:11:12.745        "flush": true,
00:11:12.745        "reset": true,
00:11:12.745        "nvme_admin": false,
00:11:12.745        "nvme_io": false,
00:11:12.745        "nvme_io_md": false,
00:11:12.745        "write_zeroes": true,
00:11:12.745        "zcopy": true,
00:11:12.745        "get_zone_info": false,
00:11:12.745        "zone_management": false,
00:11:12.745        "zone_append": false,
00:11:12.745        "compare": false,
00:11:12.745        "compare_and_write": false,
00:11:12.745        "abort": true,
00:11:12.745        "seek_hole": false,
00:11:12.745        "seek_data": false,
00:11:12.745        "copy": true,
00:11:12.745        "nvme_iov_md": false
00:11:12.745      },
00:11:12.745      "memory_domains": [
00:11:12.745        {
00:11:12.745          "dma_device_id": "system",
00:11:12.745          "dma_device_type": 1
00:11:12.745        },
00:11:12.745        {
00:11:12.745          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:12.745          "dma_device_type": 2
00:11:12.745        }
00:11:12.745      ],
00:11:12.745      "driver_specific": {}
00:11:12.745    }
00:11:12.745  ]
00:11:12.746   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:12.746   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:11:12.746   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:11:12.746   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
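The loop above recreates BaseBdev2 through BaseBdev4 as 32 MiB malloc bdevs with 512-byte blocks (hence 65536 blocks in the dumps) and waits for each one via waitforbdev: bdev_wait_for_examine followed by bdev_get_bdevs with a 2000 ms timeout (-t 2000). The create-and-wait pattern, sketched standalone under the same scripts/rpc.py assumption:

    # Create one member and block until the bdev layer reports it.
    ./scripts/rpc.py bdev_malloc_create 32 512 -b BaseBdev2
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py bdev_get_bdevs -b BaseBdev2 -t 2000 > /dev/null && echo "BaseBdev2 ready"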
00:11:12.746   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:12.746   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:12.746   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:12.746  [2024-12-16 11:32:38.680437] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:12.746  [2024-12-16 11:32:38.680573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:12.746  [2024-12-16 11:32:38.680631] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:12.746  [2024-12-16 11:32:38.682751] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:12.746  [2024-12-16 11:32:38.682855] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:11:12.746   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:12.746   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:11:12.746   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:12.746   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:12.746   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:12.746   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:12.746   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:12.746   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:12.746   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:12.746   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:12.746   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:12.746    11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:12.746    11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:12.746    11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:12.746    11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:12.746    11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:12.746   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:12.746    "name": "Existed_Raid",
00:11:12.746    "uuid": "268f0ba6-c3bd-4907-913f-ef5fe49a0673",
00:11:12.746    "strip_size_kb": 64,
00:11:12.746    "state": "configuring",
00:11:12.746    "raid_level": "raid0",
00:11:12.746    "superblock": true,
00:11:12.746    "num_base_bdevs": 4,
00:11:12.746    "num_base_bdevs_discovered": 3,
00:11:12.746    "num_base_bdevs_operational": 4,
00:11:12.746    "base_bdevs_list": [
00:11:12.746      {
00:11:12.746        "name": "BaseBdev1",
00:11:12.746        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:12.746        "is_configured": false,
00:11:12.746        "data_offset": 0,
00:11:12.746        "data_size": 0
00:11:12.746      },
00:11:12.746      {
00:11:12.746        "name": "BaseBdev2",
00:11:12.746        "uuid": "53dcec7e-2734-468c-b526-5d5f3091baf5",
00:11:12.746        "is_configured": true,
00:11:12.746        "data_offset": 2048,
00:11:12.746        "data_size": 63488
00:11:12.746      },
00:11:12.746      {
00:11:12.746        "name": "BaseBdev3",
00:11:12.746        "uuid": "ba73e6d6-c4b4-45f6-aa9a-7bda2315225b",
00:11:12.746        "is_configured": true,
00:11:12.746        "data_offset": 2048,
00:11:12.746        "data_size": 63488
00:11:12.746      },
00:11:12.746      {
00:11:12.746        "name": "BaseBdev4",
00:11:12.746        "uuid": "2892f945-266c-465f-b4a7-eede4cfd9d19",
00:11:12.746        "is_configured": true,
00:11:12.746        "data_offset": 2048,
00:11:12.746        "data_size": 63488
00:11:12.746      }
00:11:12.746    ]
00:11:12.746  }'
00:11:12.746   11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:12.746   11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
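Recreating the array with bdev_raid_create while BaseBdev1 does not exist yet leaves Existed_Raid in the configuring state: the superblock variant (-s, matching "superblock": true in the dump above) keeps a placeholder slot for the missing member, so 3 of 4 base bdevs are discovered. A sketch of the call and the follow-up check, again assuming scripts/rpc.py and jq:

    # -z 64 = 64 KiB strip size, -s = on-disk superblock, -r raid0 = level, -b = member list.
    ./scripts/rpc.py bdev_raid_create -z 64 -s -r raid0 \
      -b "BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4" -n Existed_Raid
    ./scripts/rpc.py bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid").state'    # expected: configuring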
00:11:13.333   11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:11:13.333   11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:13.333   11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:13.333  [2024-12-16 11:32:39.127652] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:11:13.333   11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:13.333   11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:11:13.333   11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:13.333   11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:13.333   11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:13.333   11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:13.333   11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:13.333   11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:13.333   11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:13.333   11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:13.333   11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:13.333    11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:13.333    11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:13.333    11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:13.333    11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:13.333    11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:13.333   11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:13.333    "name": "Existed_Raid",
00:11:13.333    "uuid": "268f0ba6-c3bd-4907-913f-ef5fe49a0673",
00:11:13.333    "strip_size_kb": 64,
00:11:13.333    "state": "configuring",
00:11:13.333    "raid_level": "raid0",
00:11:13.333    "superblock": true,
00:11:13.333    "num_base_bdevs": 4,
00:11:13.333    "num_base_bdevs_discovered": 2,
00:11:13.333    "num_base_bdevs_operational": 4,
00:11:13.333    "base_bdevs_list": [
00:11:13.333      {
00:11:13.333        "name": "BaseBdev1",
00:11:13.333        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:13.333        "is_configured": false,
00:11:13.333        "data_offset": 0,
00:11:13.333        "data_size": 0
00:11:13.333      },
00:11:13.333      {
00:11:13.333        "name": null,
00:11:13.333        "uuid": "53dcec7e-2734-468c-b526-5d5f3091baf5",
00:11:13.333        "is_configured": false,
00:11:13.333        "data_offset": 0,
00:11:13.333        "data_size": 63488
00:11:13.333      },
00:11:13.333      {
00:11:13.333        "name": "BaseBdev3",
00:11:13.333        "uuid": "ba73e6d6-c4b4-45f6-aa9a-7bda2315225b",
00:11:13.333        "is_configured": true,
00:11:13.333        "data_offset": 2048,
00:11:13.333        "data_size": 63488
00:11:13.333      },
00:11:13.333      {
00:11:13.333        "name": "BaseBdev4",
00:11:13.333        "uuid": "2892f945-266c-465f-b4a7-eede4cfd9d19",
00:11:13.333        "is_configured": true,
00:11:13.333        "data_offset": 2048,
00:11:13.333        "data_size": 63488
00:11:13.333      }
00:11:13.333    ]
00:11:13.333  }'
00:11:13.333   11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:13.333   11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:13.592    11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:13.592    11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:13.592    11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:13.592    11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:11:13.592    11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:13.592   11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
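With the superblock in place, bdev_raid_remove_base_bdev BaseBdev2 does not shrink the array; it only clears the slot, which is why the jq probe above finds base_bdevs_list[1].is_configured == false while num_base_bdevs stays at 4. The probe on its own, same assumptions:

    # Slot 1 should read false after the remove_base_bdev call above.
    ./scripts/rpc.py bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[1].is_configured'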
00:11:13.592   11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:11:13.592   11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:13.592   11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:13.592  [2024-12-16 11:32:39.637816] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:13.592  BaseBdev1
00:11:13.592   11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:13.592   11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:11:13.592   11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:11:13.592   11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:13.592   11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:11:13.592   11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:13.592   11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:13.592   11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:13.592   11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:13.592   11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:13.592   11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:13.592   11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:11:13.592   11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:13.592   11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:13.851  [
00:11:13.851    {
00:11:13.851      "name": "BaseBdev1",
00:11:13.851      "aliases": [
00:11:13.851        "7dbf0dec-f1bc-48a0-83b3-9650474fc362"
00:11:13.851      ],
00:11:13.851      "product_name": "Malloc disk",
00:11:13.851      "block_size": 512,
00:11:13.851      "num_blocks": 65536,
00:11:13.851      "uuid": "7dbf0dec-f1bc-48a0-83b3-9650474fc362",
00:11:13.851      "assigned_rate_limits": {
00:11:13.851        "rw_ios_per_sec": 0,
00:11:13.851        "rw_mbytes_per_sec": 0,
00:11:13.851        "r_mbytes_per_sec": 0,
00:11:13.851        "w_mbytes_per_sec": 0
00:11:13.851      },
00:11:13.851      "claimed": true,
00:11:13.851      "claim_type": "exclusive_write",
00:11:13.851      "zoned": false,
00:11:13.851      "supported_io_types": {
00:11:13.851        "read": true,
00:11:13.851        "write": true,
00:11:13.851        "unmap": true,
00:11:13.851        "flush": true,
00:11:13.851        "reset": true,
00:11:13.851        "nvme_admin": false,
00:11:13.851        "nvme_io": false,
00:11:13.851        "nvme_io_md": false,
00:11:13.851        "write_zeroes": true,
00:11:13.851        "zcopy": true,
00:11:13.851        "get_zone_info": false,
00:11:13.851        "zone_management": false,
00:11:13.851        "zone_append": false,
00:11:13.851        "compare": false,
00:11:13.851        "compare_and_write": false,
00:11:13.851        "abort": true,
00:11:13.851        "seek_hole": false,
00:11:13.851        "seek_data": false,
00:11:13.851        "copy": true,
00:11:13.851        "nvme_iov_md": false
00:11:13.851      },
00:11:13.851      "memory_domains": [
00:11:13.851        {
00:11:13.851          "dma_device_id": "system",
00:11:13.851          "dma_device_type": 1
00:11:13.851        },
00:11:13.851        {
00:11:13.851          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:13.851          "dma_device_type": 2
00:11:13.851        }
00:11:13.851      ],
00:11:13.851      "driver_specific": {}
00:11:13.851    }
00:11:13.851  ]
00:11:13.851   11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:13.851   11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:11:13.851   11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:11:13.851   11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:13.851   11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:13.851   11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:13.851   11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:13.851   11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:13.851   11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:13.851   11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:13.851   11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:13.851   11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:13.851    11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:13.851    11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:13.851    11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:13.851    11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:13.851    11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:13.851   11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:13.851    "name": "Existed_Raid",
00:11:13.851    "uuid": "268f0ba6-c3bd-4907-913f-ef5fe49a0673",
00:11:13.851    "strip_size_kb": 64,
00:11:13.851    "state": "configuring",
00:11:13.851    "raid_level": "raid0",
00:11:13.851    "superblock": true,
00:11:13.851    "num_base_bdevs": 4,
00:11:13.851    "num_base_bdevs_discovered": 3,
00:11:13.851    "num_base_bdevs_operational": 4,
00:11:13.851    "base_bdevs_list": [
00:11:13.851      {
00:11:13.851        "name": "BaseBdev1",
00:11:13.851        "uuid": "7dbf0dec-f1bc-48a0-83b3-9650474fc362",
00:11:13.851        "is_configured": true,
00:11:13.851        "data_offset": 2048,
00:11:13.851        "data_size": 63488
00:11:13.851      },
00:11:13.851      {
00:11:13.851        "name": null,
00:11:13.851        "uuid": "53dcec7e-2734-468c-b526-5d5f3091baf5",
00:11:13.851        "is_configured": false,
00:11:13.851        "data_offset": 0,
00:11:13.851        "data_size": 63488
00:11:13.851      },
00:11:13.851      {
00:11:13.851        "name": "BaseBdev3",
00:11:13.851        "uuid": "ba73e6d6-c4b4-45f6-aa9a-7bda2315225b",
00:11:13.851        "is_configured": true,
00:11:13.851        "data_offset": 2048,
00:11:13.851        "data_size": 63488
00:11:13.851      },
00:11:13.851      {
00:11:13.851        "name": "BaseBdev4",
00:11:13.851        "uuid": "2892f945-266c-465f-b4a7-eede4cfd9d19",
00:11:13.852        "is_configured": true,
00:11:13.852        "data_offset": 2048,
00:11:13.852        "data_size": 63488
00:11:13.852      }
00:11:13.852    ]
00:11:13.852  }'
00:11:13.852   11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:13.852   11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:14.110    11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:11:14.110    11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:14.110    11:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:14.111    11:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:14.111    11:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:14.111   11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:11:14.111   11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:11:14.111   11:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:14.111   11:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:14.111  [2024-12-16 11:32:40.164993] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:11:14.111   11:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:14.111   11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:11:14.111   11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:14.111   11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:14.111   11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:14.111   11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:14.111   11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:14.111   11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:14.111   11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:14.111   11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:14.111   11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:14.370    11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:14.370    11:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:14.370    11:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:14.370    11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:14.370    11:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:14.370   11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:14.370    "name": "Existed_Raid",
00:11:14.370    "uuid": "268f0ba6-c3bd-4907-913f-ef5fe49a0673",
00:11:14.370    "strip_size_kb": 64,
00:11:14.370    "state": "configuring",
00:11:14.370    "raid_level": "raid0",
00:11:14.370    "superblock": true,
00:11:14.370    "num_base_bdevs": 4,
00:11:14.370    "num_base_bdevs_discovered": 2,
00:11:14.370    "num_base_bdevs_operational": 4,
00:11:14.370    "base_bdevs_list": [
00:11:14.370      {
00:11:14.370        "name": "BaseBdev1",
00:11:14.370        "uuid": "7dbf0dec-f1bc-48a0-83b3-9650474fc362",
00:11:14.370        "is_configured": true,
00:11:14.370        "data_offset": 2048,
00:11:14.370        "data_size": 63488
00:11:14.370      },
00:11:14.370      {
00:11:14.370        "name": null,
00:11:14.370        "uuid": "53dcec7e-2734-468c-b526-5d5f3091baf5",
00:11:14.370        "is_configured": false,
00:11:14.370        "data_offset": 0,
00:11:14.370        "data_size": 63488
00:11:14.370      },
00:11:14.370      {
00:11:14.370        "name": null,
00:11:14.370        "uuid": "ba73e6d6-c4b4-45f6-aa9a-7bda2315225b",
00:11:14.371        "is_configured": false,
00:11:14.371        "data_offset": 0,
00:11:14.371        "data_size": 63488
00:11:14.371      },
00:11:14.371      {
00:11:14.371        "name": "BaseBdev4",
00:11:14.371        "uuid": "2892f945-266c-465f-b4a7-eede4cfd9d19",
00:11:14.371        "is_configured": true,
00:11:14.371        "data_offset": 2048,
00:11:14.371        "data_size": 63488
00:11:14.371      }
00:11:14.371    ]
00:11:14.371  }'
00:11:14.371   11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:14.371   11:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:14.636    11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:14.636    11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:11:14.636    11:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:14.636    11:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:14.636    11:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:14.636   11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:11:14.636   11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:11:14.636   11:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:14.636   11:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:14.636  [2024-12-16 11:32:40.632271] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:14.636   11:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:14.636   11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:11:14.637   11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:14.637   11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:14.637   11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:14.637   11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:14.637   11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:14.637   11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:14.637   11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:14.637   11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:14.637   11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:14.637    11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:14.637    11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:14.637    11:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:14.637    11:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:14.637    11:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:14.637   11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:14.637    "name": "Existed_Raid",
00:11:14.637    "uuid": "268f0ba6-c3bd-4907-913f-ef5fe49a0673",
00:11:14.637    "strip_size_kb": 64,
00:11:14.637    "state": "configuring",
00:11:14.637    "raid_level": "raid0",
00:11:14.637    "superblock": true,
00:11:14.637    "num_base_bdevs": 4,
00:11:14.637    "num_base_bdevs_discovered": 3,
00:11:14.637    "num_base_bdevs_operational": 4,
00:11:14.637    "base_bdevs_list": [
00:11:14.637      {
00:11:14.637        "name": "BaseBdev1",
00:11:14.637        "uuid": "7dbf0dec-f1bc-48a0-83b3-9650474fc362",
00:11:14.637        "is_configured": true,
00:11:14.637        "data_offset": 2048,
00:11:14.637        "data_size": 63488
00:11:14.637      },
00:11:14.637      {
00:11:14.637        "name": null,
00:11:14.637        "uuid": "53dcec7e-2734-468c-b526-5d5f3091baf5",
00:11:14.637        "is_configured": false,
00:11:14.637        "data_offset": 0,
00:11:14.637        "data_size": 63488
00:11:14.637      },
00:11:14.637      {
00:11:14.637        "name": "BaseBdev3",
00:11:14.637        "uuid": "ba73e6d6-c4b4-45f6-aa9a-7bda2315225b",
00:11:14.637        "is_configured": true,
00:11:14.637        "data_offset": 2048,
00:11:14.637        "data_size": 63488
00:11:14.637      },
00:11:14.637      {
00:11:14.637        "name": "BaseBdev4",
00:11:14.637        "uuid": "2892f945-266c-465f-b4a7-eede4cfd9d19",
00:11:14.637        "is_configured": true,
00:11:14.637        "data_offset": 2048,
00:11:14.637        "data_size": 63488
00:11:14.637      }
00:11:14.637    ]
00:11:14.637  }'
00:11:14.637   11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:14.637   11:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
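Annotation (not part of the captured trace): a minimal standalone sketch of the hot-add step exercised above, assuming a running SPDK bdev_svc target and that rpc_cmd is the same JSON-RPC wrapper the test harness uses; Existed_Raid and BaseBdev3 are the names from this run.

  raid=Existed_Raid
  # Slot 2 must still be unconfigured before the hot-add.
  [[ "$(rpc_cmd bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured')" == false ]]
  # Claim BaseBdev3 for the array; with 3 of 4 slots filled it stays "configuring".
  rpc_cmd bdev_raid_add_base_bdev "$raid" BaseBdev3
  rpc_cmd bdev_raid_get_bdevs all \
    | jq -r --arg name "$raid" '.[] | select(.name == $name) | .num_base_bdevs_discovered'   # expected: 3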
00:11:15.206    11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:15.206    11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:11:15.206    11:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:15.206    11:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:15.206    11:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:15.206   11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:11:15.206   11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:11:15.206   11:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:15.206   11:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:15.206  [2024-12-16 11:32:41.151458] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:11:15.206   11:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:15.206   11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:11:15.206   11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:15.206   11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:15.206   11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:15.206   11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:15.206   11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:15.206   11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:15.206   11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:15.206   11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:15.206   11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:15.206    11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:15.206    11:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:15.206    11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:15.206    11:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:15.206    11:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:15.206   11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:15.206    "name": "Existed_Raid",
00:11:15.206    "uuid": "268f0ba6-c3bd-4907-913f-ef5fe49a0673",
00:11:15.206    "strip_size_kb": 64,
00:11:15.206    "state": "configuring",
00:11:15.206    "raid_level": "raid0",
00:11:15.206    "superblock": true,
00:11:15.206    "num_base_bdevs": 4,
00:11:15.206    "num_base_bdevs_discovered": 2,
00:11:15.206    "num_base_bdevs_operational": 4,
00:11:15.206    "base_bdevs_list": [
00:11:15.206      {
00:11:15.206        "name": null,
00:11:15.206        "uuid": "7dbf0dec-f1bc-48a0-83b3-9650474fc362",
00:11:15.206        "is_configured": false,
00:11:15.206        "data_offset": 0,
00:11:15.206        "data_size": 63488
00:11:15.206      },
00:11:15.206      {
00:11:15.206        "name": null,
00:11:15.206        "uuid": "53dcec7e-2734-468c-b526-5d5f3091baf5",
00:11:15.206        "is_configured": false,
00:11:15.206        "data_offset": 0,
00:11:15.206        "data_size": 63488
00:11:15.206      },
00:11:15.206      {
00:11:15.206        "name": "BaseBdev3",
00:11:15.206        "uuid": "ba73e6d6-c4b4-45f6-aa9a-7bda2315225b",
00:11:15.206        "is_configured": true,
00:11:15.206        "data_offset": 2048,
00:11:15.206        "data_size": 63488
00:11:15.206      },
00:11:15.206      {
00:11:15.206        "name": "BaseBdev4",
00:11:15.206        "uuid": "2892f945-266c-465f-b4a7-eede4cfd9d19",
00:11:15.206        "is_configured": true,
00:11:15.206        "data_offset": 2048,
00:11:15.206        "data_size": 63488
00:11:15.206      }
00:11:15.206    ]
00:11:15.206  }'
00:11:15.206   11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:15.206   11:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
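Annotation (same assumptions as the sketch above): the removal step just traced, written out standalone. Deleting the malloc bdev that backs an already-configured slot demotes that slot to is_configured=false while num_base_bdevs_operational stays at 4, so the array drops back to two discovered members and remains "configuring".

  rpc_cmd bdev_malloc_delete BaseBdev1
  rpc_cmd bdev_raid_get_bdevs all \
    | jq '.[0] | {state, num_base_bdevs_discovered, num_base_bdevs_operational}'
  # expected: "configuring", 2 discovered, 4 operational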
00:11:15.776    11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:15.776    11:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:15.776    11:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:15.776    11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:11:15.776    11:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:15.776   11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:11:15.776   11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:11:15.776   11:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:15.776   11:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:15.776  [2024-12-16 11:32:41.673289] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:15.776   11:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:15.776   11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:11:15.776   11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:15.776   11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:15.776   11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:15.776   11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:15.776   11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:15.776   11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:15.776   11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:15.776   11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:15.776   11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:15.776    11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:15.776    11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:15.776    11:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:15.776    11:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:15.776    11:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:15.776   11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:15.776    "name": "Existed_Raid",
00:11:15.776    "uuid": "268f0ba6-c3bd-4907-913f-ef5fe49a0673",
00:11:15.776    "strip_size_kb": 64,
00:11:15.776    "state": "configuring",
00:11:15.776    "raid_level": "raid0",
00:11:15.776    "superblock": true,
00:11:15.776    "num_base_bdevs": 4,
00:11:15.776    "num_base_bdevs_discovered": 3,
00:11:15.776    "num_base_bdevs_operational": 4,
00:11:15.776    "base_bdevs_list": [
00:11:15.776      {
00:11:15.776        "name": null,
00:11:15.776        "uuid": "7dbf0dec-f1bc-48a0-83b3-9650474fc362",
00:11:15.776        "is_configured": false,
00:11:15.776        "data_offset": 0,
00:11:15.776        "data_size": 63488
00:11:15.776      },
00:11:15.776      {
00:11:15.776        "name": "BaseBdev2",
00:11:15.776        "uuid": "53dcec7e-2734-468c-b526-5d5f3091baf5",
00:11:15.776        "is_configured": true,
00:11:15.777        "data_offset": 2048,
00:11:15.777        "data_size": 63488
00:11:15.777      },
00:11:15.777      {
00:11:15.777        "name": "BaseBdev3",
00:11:15.777        "uuid": "ba73e6d6-c4b4-45f6-aa9a-7bda2315225b",
00:11:15.777        "is_configured": true,
00:11:15.777        "data_offset": 2048,
00:11:15.777        "data_size": 63488
00:11:15.777      },
00:11:15.777      {
00:11:15.777        "name": "BaseBdev4",
00:11:15.777        "uuid": "2892f945-266c-465f-b4a7-eede4cfd9d19",
00:11:15.777        "is_configured": true,
00:11:15.777        "data_offset": 2048,
00:11:15.777        "data_size": 63488
00:11:15.777      }
00:11:15.777    ]
00:11:15.777  }'
00:11:15.777   11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:15.777   11:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
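Annotation: the same check-then-add cycle repeats above for slot 0 and BaseBdev2. A hypothetical helper capturing the pattern (the function name and structure are illustrative only, not part of the test scripts; rpc_cmd as before):

  check_slot_configured() {   # usage: check_slot_configured <slot-index> <true|false>
    local got
    got=$(rpc_cmd bdev_raid_get_bdevs all | jq ".[0].base_bdevs_list[$1].is_configured")
    [[ "$got" == "$2" ]]
  }
  check_slot_configured 1 true    # BaseBdev2 now occupies slot 1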
00:11:16.347    11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:16.347    11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:16.347    11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:16.347    11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:11:16.347    11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:16.347   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:11:16.347    11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:16.347    11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:16.347    11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:16.347    11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:11:16.347    11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:16.347   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7dbf0dec-f1bc-48a0-83b3-9650474fc362
00:11:16.347   11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:16.347   11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:16.347  [2024-12-16 11:32:42.216762] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:11:16.347  [2024-12-16 11:32:42.217191] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:11:16.347  [2024-12-16 11:32:42.217267] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:11:16.347  NewBaseBdev
00:11:16.347  [2024-12-16 11:32:42.217633] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:11:16.347  [2024-12-16 11:32:42.217804] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:11:16.347  [2024-12-16 11:32:42.217828] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00
00:11:16.347  [2024-12-16 11:32:42.217962] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:16.347   11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:16.347   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:11:16.347   11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev
00:11:16.347   11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:16.347   11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:11:16.347   11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:16.347   11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:16.347   11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:16.347   11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:16.347   11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:16.347   11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:16.347   11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:11:16.347   11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:16.347   11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:16.347  [
00:11:16.347  {
00:11:16.347  "name": "NewBaseBdev",
00:11:16.347  "aliases": [
00:11:16.347  "7dbf0dec-f1bc-48a0-83b3-9650474fc362"
00:11:16.347  ],
00:11:16.347  "product_name": "Malloc disk",
00:11:16.347  "block_size": 512,
00:11:16.347  "num_blocks": 65536,
00:11:16.347  "uuid": "7dbf0dec-f1bc-48a0-83b3-9650474fc362",
00:11:16.347  "assigned_rate_limits": {
00:11:16.347  "rw_ios_per_sec": 0,
00:11:16.347  "rw_mbytes_per_sec": 0,
00:11:16.347  "r_mbytes_per_sec": 0,
00:11:16.347  "w_mbytes_per_sec": 0
00:11:16.347  },
00:11:16.347  "claimed": true,
00:11:16.347  "claim_type": "exclusive_write",
00:11:16.347  "zoned": false,
00:11:16.347  "supported_io_types": {
00:11:16.347  "read": true,
00:11:16.347  "write": true,
00:11:16.347  "unmap": true,
00:11:16.347  "flush": true,
00:11:16.347  "reset": true,
00:11:16.347  "nvme_admin": false,
00:11:16.347  "nvme_io": false,
00:11:16.347  "nvme_io_md": false,
00:11:16.348  "write_zeroes": true,
00:11:16.348  "zcopy": true,
00:11:16.348  "get_zone_info": false,
00:11:16.348  "zone_management": false,
00:11:16.348  "zone_append": false,
00:11:16.348  "compare": false,
00:11:16.348  "compare_and_write": false,
00:11:16.348  "abort": true,
00:11:16.348  "seek_hole": false,
00:11:16.348  "seek_data": false,
00:11:16.348  "copy": true,
00:11:16.348  "nvme_iov_md": false
00:11:16.348  },
00:11:16.348  "memory_domains": [
00:11:16.348  {
00:11:16.348  "dma_device_id": "system",
00:11:16.348  "dma_device_type": 1
00:11:16.348  },
00:11:16.348  {
00:11:16.348  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:16.348  "dma_device_type": 2
00:11:16.348  }
00:11:16.348  ],
00:11:16.348  "driver_specific": {}
00:11:16.348  }
00:11:16.348  ]
00:11:16.348   11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:16.348   11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
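Annotation (same rpc_cmd assumption): the re-create step above relies on the removed slot keeping its original uuid. Creating a fresh malloc bdev with that uuid, even under a new name, lets the raid module re-claim it during bdev examine; the harness then waits until the bdev reports itself as claimed.

  uuid=$(rpc_cmd bdev_raid_get_bdevs all | jq -r '.[0].base_bdevs_list[0].uuid')
  rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u "$uuid"
  rpc_cmd bdev_wait_for_examine
  rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 | jq '.[0].claimed'   # expected: true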
00:11:16.348   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4
00:11:16.348   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:16.348   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:16.348   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:16.348   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:16.348   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:16.348   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:16.348   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:16.348   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:16.348   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:16.348    11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:16.348    11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:16.348    11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:16.348    11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:16.348    11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:16.348   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:16.348    "name": "Existed_Raid",
00:11:16.348    "uuid": "268f0ba6-c3bd-4907-913f-ef5fe49a0673",
00:11:16.348    "strip_size_kb": 64,
00:11:16.348    "state": "online",
00:11:16.348    "raid_level": "raid0",
00:11:16.348    "superblock": true,
00:11:16.348    "num_base_bdevs": 4,
00:11:16.348    "num_base_bdevs_discovered": 4,
00:11:16.348    "num_base_bdevs_operational": 4,
00:11:16.348    "base_bdevs_list": [
00:11:16.348      {
00:11:16.348        "name": "NewBaseBdev",
00:11:16.348        "uuid": "7dbf0dec-f1bc-48a0-83b3-9650474fc362",
00:11:16.348        "is_configured": true,
00:11:16.348        "data_offset": 2048,
00:11:16.348        "data_size": 63488
00:11:16.348      },
00:11:16.348      {
00:11:16.348        "name": "BaseBdev2",
00:11:16.348        "uuid": "53dcec7e-2734-468c-b526-5d5f3091baf5",
00:11:16.348        "is_configured": true,
00:11:16.348        "data_offset": 2048,
00:11:16.348        "data_size": 63488
00:11:16.348      },
00:11:16.348      {
00:11:16.348        "name": "BaseBdev3",
00:11:16.348        "uuid": "ba73e6d6-c4b4-45f6-aa9a-7bda2315225b",
00:11:16.348        "is_configured": true,
00:11:16.348        "data_offset": 2048,
00:11:16.348        "data_size": 63488
00:11:16.348      },
00:11:16.348      {
00:11:16.348        "name": "BaseBdev4",
00:11:16.348        "uuid": "2892f945-266c-465f-b4a7-eede4cfd9d19",
00:11:16.348        "is_configured": true,
00:11:16.348        "data_offset": 2048,
00:11:16.348        "data_size": 63488
00:11:16.348      }
00:11:16.348    ]
00:11:16.348  }'
00:11:16.348   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:16.348   11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
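Annotation: with all four slots configured the array flips from "configuring" to "online", and the sizes in the dump above are consistent: each 65536-block, 512 B malloc bdev reserves 2048 blocks for the superblock (data_offset), leaving 63488 data blocks, so raid0 over four members exposes 4 * 63488 = 253952 blocks. A quick check under the same assumptions as the earlier sketches:

  rpc_cmd bdev_raid_get_bdevs all | jq -r '.[0].state'             # expected: online
  rpc_cmd bdev_get_bdevs -b Existed_Raid | jq '.[0].num_blocks'    # expected: 253952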
00:11:16.917   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:11:16.917   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:11:16.917   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:16.917   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:16.917   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:11:16.917   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:16.917    11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:11:16.918    11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:16.918    11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:16.918    11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:16.918  [2024-12-16 11:32:42.736241] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:16.918    11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:16.918   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:16.918    "name": "Existed_Raid",
00:11:16.918    "aliases": [
00:11:16.918      "268f0ba6-c3bd-4907-913f-ef5fe49a0673"
00:11:16.918    ],
00:11:16.918    "product_name": "Raid Volume",
00:11:16.918    "block_size": 512,
00:11:16.918    "num_blocks": 253952,
00:11:16.918    "uuid": "268f0ba6-c3bd-4907-913f-ef5fe49a0673",
00:11:16.918    "assigned_rate_limits": {
00:11:16.918      "rw_ios_per_sec": 0,
00:11:16.918      "rw_mbytes_per_sec": 0,
00:11:16.918      "r_mbytes_per_sec": 0,
00:11:16.918      "w_mbytes_per_sec": 0
00:11:16.918    },
00:11:16.918    "claimed": false,
00:11:16.918    "zoned": false,
00:11:16.918    "supported_io_types": {
00:11:16.918      "read": true,
00:11:16.918      "write": true,
00:11:16.918      "unmap": true,
00:11:16.918      "flush": true,
00:11:16.918      "reset": true,
00:11:16.918      "nvme_admin": false,
00:11:16.918      "nvme_io": false,
00:11:16.918      "nvme_io_md": false,
00:11:16.918      "write_zeroes": true,
00:11:16.918      "zcopy": false,
00:11:16.918      "get_zone_info": false,
00:11:16.918      "zone_management": false,
00:11:16.918      "zone_append": false,
00:11:16.918      "compare": false,
00:11:16.918      "compare_and_write": false,
00:11:16.918      "abort": false,
00:11:16.918      "seek_hole": false,
00:11:16.918      "seek_data": false,
00:11:16.918      "copy": false,
00:11:16.918      "nvme_iov_md": false
00:11:16.918    },
00:11:16.918    "memory_domains": [
00:11:16.918      {
00:11:16.918        "dma_device_id": "system",
00:11:16.918        "dma_device_type": 1
00:11:16.918      },
00:11:16.918      {
00:11:16.918        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:16.918        "dma_device_type": 2
00:11:16.918      },
00:11:16.918      {
00:11:16.918        "dma_device_id": "system",
00:11:16.918        "dma_device_type": 1
00:11:16.918      },
00:11:16.918      {
00:11:16.918        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:16.918        "dma_device_type": 2
00:11:16.918      },
00:11:16.918      {
00:11:16.918        "dma_device_id": "system",
00:11:16.918        "dma_device_type": 1
00:11:16.918      },
00:11:16.918      {
00:11:16.918        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:16.918        "dma_device_type": 2
00:11:16.918      },
00:11:16.918      {
00:11:16.918        "dma_device_id": "system",
00:11:16.918        "dma_device_type": 1
00:11:16.918      },
00:11:16.918      {
00:11:16.918        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:16.918        "dma_device_type": 2
00:11:16.918      }
00:11:16.918    ],
00:11:16.918    "driver_specific": {
00:11:16.918      "raid": {
00:11:16.918        "uuid": "268f0ba6-c3bd-4907-913f-ef5fe49a0673",
00:11:16.918        "strip_size_kb": 64,
00:11:16.918        "state": "online",
00:11:16.918        "raid_level": "raid0",
00:11:16.918        "superblock": true,
00:11:16.918        "num_base_bdevs": 4,
00:11:16.918        "num_base_bdevs_discovered": 4,
00:11:16.918        "num_base_bdevs_operational": 4,
00:11:16.918        "base_bdevs_list": [
00:11:16.918          {
00:11:16.918            "name": "NewBaseBdev",
00:11:16.918            "uuid": "7dbf0dec-f1bc-48a0-83b3-9650474fc362",
00:11:16.918            "is_configured": true,
00:11:16.918            "data_offset": 2048,
00:11:16.918            "data_size": 63488
00:11:16.918          },
00:11:16.918          {
00:11:16.918            "name": "BaseBdev2",
00:11:16.918            "uuid": "53dcec7e-2734-468c-b526-5d5f3091baf5",
00:11:16.918            "is_configured": true,
00:11:16.918            "data_offset": 2048,
00:11:16.918            "data_size": 63488
00:11:16.918          },
00:11:16.918          {
00:11:16.918            "name": "BaseBdev3",
00:11:16.918            "uuid": "ba73e6d6-c4b4-45f6-aa9a-7bda2315225b",
00:11:16.918            "is_configured": true,
00:11:16.918            "data_offset": 2048,
00:11:16.918            "data_size": 63488
00:11:16.918          },
00:11:16.918          {
00:11:16.918            "name": "BaseBdev4",
00:11:16.918            "uuid": "2892f945-266c-465f-b4a7-eede4cfd9d19",
00:11:16.918            "is_configured": true,
00:11:16.918            "data_offset": 2048,
00:11:16.918            "data_size": 63488
00:11:16.918          }
00:11:16.918        ]
00:11:16.918      }
00:11:16.918    }
00:11:16.918  }'
00:11:16.918    11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:16.918   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:11:16.918  BaseBdev2
00:11:16.918  BaseBdev3
00:11:16.918  BaseBdev4'
00:11:16.918    11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:16.918   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:11:16.918   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:16.918    11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:16.918    11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:11:16.918    11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:16.918    11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:16.918    11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:16.918   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:16.918   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:16.918   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:16.918    11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:11:16.918    11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:16.918    11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:16.918    11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:16.918    11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:16.918   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:16.918   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:16.918   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:16.918    11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:16.918    11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:11:16.918    11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:16.918    11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:17.177    11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:17.177   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:17.177   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:17.177   11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:17.177    11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:17.177    11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:11:17.177    11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:17.177    11:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:17.177    11:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:17.177   11:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:17.177   11:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
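Annotation: the loop above is the check performed by verify_raid_bdev_properties: the raid volume and every configured base bdev must report the same block size and metadata/DIF layout. For plain malloc bdevs md_size, md_interleave and dif_type are unset, so each joined string is just "512" followed by empty fields. Standalone sketch, same rpc_cmd/jq assumptions:

  fmt='.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
  raid_fmt=$(rpc_cmd bdev_get_bdevs -b Existed_Raid | jq -r "$fmt")
  for name in NewBaseBdev BaseBdev2 BaseBdev3 BaseBdev4; do
    [[ "$(rpc_cmd bdev_get_bdevs -b "$name" | jq -r "$fmt")" == "$raid_fmt" ]]
  done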
00:11:17.177   11:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:11:17.177   11:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:17.177   11:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:17.177  [2024-12-16 11:32:43.043457] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:17.177  [2024-12-16 11:32:43.043510] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:17.177  [2024-12-16 11:32:43.043666] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:17.177  [2024-12-16 11:32:43.043761] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:17.177  [2024-12-16 11:32:43.043778] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline
00:11:17.177   11:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:17.177   11:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81325
00:11:17.177   11:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 81325 ']'
00:11:17.177   11:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 81325
00:11:17.177    11:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname
00:11:17.177   11:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:17.177    11:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81325
00:11:17.177   11:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:11:17.177   11:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:11:17.177   11:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81325'
00:11:17.177  killing process with pid 81325
00:11:17.177   11:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 81325
00:11:17.177  [2024-12-16 11:32:43.081325] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:11:17.177   11:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 81325
00:11:17.177  [2024-12-16 11:32:43.124918] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:11:17.437   11:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:11:17.437  
00:11:17.437  real	0m9.902s
00:11:17.437  user	0m16.860s
00:11:17.437  sys	0m2.161s
00:11:17.437   11:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:17.437   11:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:17.437  ************************************
00:11:17.437  END TEST raid_state_function_test_sb
00:11:17.437  ************************************
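Annotation: teardown sketch for the test that just ended. Deleting the raid takes it from online to offline and releases its base bdevs, after which the bdev_svc app is stopped; in the harness that is killprocess on the stored pid (81325 in this run), which is roughly kill plus wait for a non-sudo reactor process. The pid variable name below is illustrative.

  rpc_cmd bdev_raid_delete Existed_Raid    # state: online -> offline, base bdevs freed
  kill "$app_pid" && wait "$app_pid"       # roughly what the harness killprocess helper does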
00:11:17.437   11:32:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4
00:11:17.437   11:32:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:11:17.437   11:32:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:17.437   11:32:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:11:17.437  ************************************
00:11:17.437  START TEST raid_superblock_test
00:11:17.437  ************************************
00:11:17.437   11:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4
00:11:17.437   11:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0
00:11:17.437   11:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4
00:11:17.437   11:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:11:17.437   11:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:11:17.437   11:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:11:17.437   11:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:11:17.437   11:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:11:17.437   11:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:11:17.437   11:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:11:17.437   11:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:11:17.437   11:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:11:17.437   11:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:11:17.437   11:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:11:17.437   11:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']'
00:11:17.437   11:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:11:17.437   11:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:11:17.437   11:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81980
00:11:17.437   11:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:11:17.437   11:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81980
00:11:17.437   11:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 81980 ']'
00:11:17.437   11:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:17.437   11:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:11:17.437   11:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:17.437  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:17.437   11:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:11:17.437   11:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:17.695  [2024-12-16 11:32:43.550858] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:11:17.695  [2024-12-16 11:32:43.551117] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81980 ]
00:11:17.696  [2024-12-16 11:32:43.715724] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:17.954  [2024-12-16 11:32:43.770006] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:11:17.954  [2024-12-16 11:32:43.815007] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:17.954  [2024-12-16 11:32:43.815139] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
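Annotation: how this test instance comes up, based on the command line recorded above: bdev_svc is started with raid debug logging (-L bdev_raid) and the harness waits for its JSON-RPC socket (/var/tmp/spdk.sock) before sending commands. The binary path is the one from this CI run; waitforlisten is the harness helper seen in the trace.

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid &
  raid_pid=$!
  waitforlisten "$raid_pid"   # polls the RPC socket until the app answers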
00:11:18.520   11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:11:18.520   11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:11:18.520   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:11:18.520   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:18.520   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:11:18.520   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:11:18.520   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:18.521  malloc1
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:18.521  [2024-12-16 11:32:44.514706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:11:18.521  [2024-12-16 11:32:44.514846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:18.521  [2024-12-16 11:32:44.514886] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:11:18.521  [2024-12-16 11:32:44.514936] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:18.521  [2024-12-16 11:32:44.517315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:18.521  [2024-12-16 11:32:44.517413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:11:18.521  pt1
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:18.521  malloc2
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:18.521  [2024-12-16 11:32:44.554517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:18.521  [2024-12-16 11:32:44.554594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:18.521  [2024-12-16 11:32:44.554613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:11:18.521  [2024-12-16 11:32:44.554625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:18.521  [2024-12-16 11:32:44.557107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:18.521  [2024-12-16 11:32:44.557151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:18.521  pt2
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:18.521  malloc3
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:18.521   11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:18.521  [2024-12-16 11:32:44.584319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:11:18.521  [2024-12-16 11:32:44.584455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:18.521  [2024-12-16 11:32:44.584495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:11:18.521  [2024-12-16 11:32:44.584530] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:18.780  [2024-12-16 11:32:44.586876] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:18.780  [2024-12-16 11:32:44.586963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:11:18.780  pt3
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:18.780  malloc4
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:18.780  [2024-12-16 11:32:44.617616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:11:18.780  [2024-12-16 11:32:44.617734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:18.780  [2024-12-16 11:32:44.617775] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:11:18.780  [2024-12-16 11:32:44.617818] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:18.780  [2024-12-16 11:32:44.620337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:18.780  [2024-12-16 11:32:44.620427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:11:18.780  pt4
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:18.780  [2024-12-16 11:32:44.629686] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:11:18.780  [2024-12-16 11:32:44.631902] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:18.780  [2024-12-16 11:32:44.632022] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:11:18.780  [2024-12-16 11:32:44.632124] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:11:18.780  [2024-12-16 11:32:44.632357] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:11:18.780  [2024-12-16 11:32:44.632415] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:11:18.780  [2024-12-16 11:32:44.632753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:11:18.780  [2024-12-16 11:32:44.632988] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:11:18.780  [2024-12-16 11:32:44.633043] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:11:18.780  [2024-12-16 11:32:44.633206] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
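Annotation: standalone sketch of the setup the trace above performs for raid_superblock_test, under the same rpc_cmd assumption. Each slot gets a 32 MiB / 512 B malloc bdev wrapped in a passthru bdev with a fixed uuid, and the array is then created over the passthru devices with the superblock flag (-s), which is why it comes up online immediately with all four members discovered.

  for i in 1 2 3 4; do
    rpc_cmd bdev_malloc_create 32 512 -b "malloc$i"
    rpc_cmd bdev_passthru_create -b "malloc$i" -p "pt$i" -u "00000000-0000-0000-0000-00000000000$i"
  done
  rpc_cmd bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s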
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:18.780    11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:18.780    11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:18.780    11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:18.780    11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:18.780    11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:18.780   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:18.780    "name": "raid_bdev1",
00:11:18.780    "uuid": "3b593519-faa5-46bd-9b5d-af42ecb88e04",
00:11:18.780    "strip_size_kb": 64,
00:11:18.780    "state": "online",
00:11:18.780    "raid_level": "raid0",
00:11:18.780    "superblock": true,
00:11:18.780    "num_base_bdevs": 4,
00:11:18.780    "num_base_bdevs_discovered": 4,
00:11:18.780    "num_base_bdevs_operational": 4,
00:11:18.780    "base_bdevs_list": [
00:11:18.780      {
00:11:18.780        "name": "pt1",
00:11:18.780        "uuid": "00000000-0000-0000-0000-000000000001",
00:11:18.780        "is_configured": true,
00:11:18.780        "data_offset": 2048,
00:11:18.780        "data_size": 63488
00:11:18.780      },
00:11:18.780      {
00:11:18.780        "name": "pt2",
00:11:18.780        "uuid": "00000000-0000-0000-0000-000000000002",
00:11:18.780        "is_configured": true,
00:11:18.780        "data_offset": 2048,
00:11:18.780        "data_size": 63488
00:11:18.780      },
00:11:18.780      {
00:11:18.780        "name": "pt3",
00:11:18.780        "uuid": "00000000-0000-0000-0000-000000000003",
00:11:18.780        "is_configured": true,
00:11:18.780        "data_offset": 2048,
00:11:18.780        "data_size": 63488
00:11:18.781      },
00:11:18.781      {
00:11:18.781        "name": "pt4",
00:11:18.781        "uuid": "00000000-0000-0000-0000-000000000004",
00:11:18.781        "is_configured": true,
00:11:18.781        "data_offset": 2048,
00:11:18.781        "data_size": 63488
00:11:18.781      }
00:11:18.781    ]
00:11:18.781  }'
00:11:18.781   11:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:18.781   11:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.043   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:11:19.043   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:11:19.043   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:19.043   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:19.043   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:11:19.043   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:19.313    11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:19.313    11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:19.313    11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.313    11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.313  [2024-12-16 11:32:45.113223] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:19.313    11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.313   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:19.313    "name": "raid_bdev1",
00:11:19.313    "aliases": [
00:11:19.313      "3b593519-faa5-46bd-9b5d-af42ecb88e04"
00:11:19.313    ],
00:11:19.313    "product_name": "Raid Volume",
00:11:19.313    "block_size": 512,
00:11:19.313    "num_blocks": 253952,
00:11:19.313    "uuid": "3b593519-faa5-46bd-9b5d-af42ecb88e04",
00:11:19.313    "assigned_rate_limits": {
00:11:19.313      "rw_ios_per_sec": 0,
00:11:19.313      "rw_mbytes_per_sec": 0,
00:11:19.313      "r_mbytes_per_sec": 0,
00:11:19.313      "w_mbytes_per_sec": 0
00:11:19.313    },
00:11:19.313    "claimed": false,
00:11:19.313    "zoned": false,
00:11:19.313    "supported_io_types": {
00:11:19.313      "read": true,
00:11:19.313      "write": true,
00:11:19.313      "unmap": true,
00:11:19.313      "flush": true,
00:11:19.313      "reset": true,
00:11:19.313      "nvme_admin": false,
00:11:19.313      "nvme_io": false,
00:11:19.313      "nvme_io_md": false,
00:11:19.313      "write_zeroes": true,
00:11:19.313      "zcopy": false,
00:11:19.313      "get_zone_info": false,
00:11:19.313      "zone_management": false,
00:11:19.313      "zone_append": false,
00:11:19.313      "compare": false,
00:11:19.313      "compare_and_write": false,
00:11:19.313      "abort": false,
00:11:19.313      "seek_hole": false,
00:11:19.313      "seek_data": false,
00:11:19.313      "copy": false,
00:11:19.313      "nvme_iov_md": false
00:11:19.313    },
00:11:19.313    "memory_domains": [
00:11:19.313      {
00:11:19.313        "dma_device_id": "system",
00:11:19.313        "dma_device_type": 1
00:11:19.313      },
00:11:19.313      {
00:11:19.313        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:19.313        "dma_device_type": 2
00:11:19.313      },
00:11:19.313      {
00:11:19.313        "dma_device_id": "system",
00:11:19.313        "dma_device_type": 1
00:11:19.313      },
00:11:19.313      {
00:11:19.313        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:19.313        "dma_device_type": 2
00:11:19.313      },
00:11:19.313      {
00:11:19.313        "dma_device_id": "system",
00:11:19.313        "dma_device_type": 1
00:11:19.313      },
00:11:19.313      {
00:11:19.313        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:19.313        "dma_device_type": 2
00:11:19.313      },
00:11:19.313      {
00:11:19.313        "dma_device_id": "system",
00:11:19.313        "dma_device_type": 1
00:11:19.313      },
00:11:19.313      {
00:11:19.313        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:19.313        "dma_device_type": 2
00:11:19.313      }
00:11:19.313    ],
00:11:19.313    "driver_specific": {
00:11:19.313      "raid": {
00:11:19.313        "uuid": "3b593519-faa5-46bd-9b5d-af42ecb88e04",
00:11:19.313        "strip_size_kb": 64,
00:11:19.313        "state": "online",
00:11:19.313        "raid_level": "raid0",
00:11:19.313        "superblock": true,
00:11:19.313        "num_base_bdevs": 4,
00:11:19.313        "num_base_bdevs_discovered": 4,
00:11:19.313        "num_base_bdevs_operational": 4,
00:11:19.313        "base_bdevs_list": [
00:11:19.313          {
00:11:19.313            "name": "pt1",
00:11:19.313            "uuid": "00000000-0000-0000-0000-000000000001",
00:11:19.313            "is_configured": true,
00:11:19.313            "data_offset": 2048,
00:11:19.313            "data_size": 63488
00:11:19.313          },
00:11:19.313          {
00:11:19.313            "name": "pt2",
00:11:19.313            "uuid": "00000000-0000-0000-0000-000000000002",
00:11:19.313            "is_configured": true,
00:11:19.313            "data_offset": 2048,
00:11:19.313            "data_size": 63488
00:11:19.313          },
00:11:19.313          {
00:11:19.313            "name": "pt3",
00:11:19.313            "uuid": "00000000-0000-0000-0000-000000000003",
00:11:19.313            "is_configured": true,
00:11:19.313            "data_offset": 2048,
00:11:19.313            "data_size": 63488
00:11:19.313          },
00:11:19.313          {
00:11:19.313            "name": "pt4",
00:11:19.313            "uuid": "00000000-0000-0000-0000-000000000004",
00:11:19.313            "is_configured": true,
00:11:19.313            "data_offset": 2048,
00:11:19.313            "data_size": 63488
00:11:19.313          }
00:11:19.313        ]
00:11:19.313      }
00:11:19.313    }
00:11:19.313  }'
00:11:19.313    11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:19.313   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:11:19.313  pt2
00:11:19.313  pt3
00:11:19.313  pt4'
00:11:19.313    11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:19.313   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:11:19.313   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:19.313    11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:11:19.313    11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.313    11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.313    11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:19.313    11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.313   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:19.313   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:19.313   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:19.313    11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:19.313    11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:11:19.313    11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.313    11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.313    11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.313   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:19.313   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:19.313   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:19.313    11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:11:19.313    11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.313    11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.313    11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:19.313    11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.572   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:19.572   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:19.572   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:19.572    11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:11:19.572    11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.572    11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:19.572    11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.572    11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.572   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:19.572   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:19.572    11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:11:19.572    11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:19.572    11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.572    11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.572  [2024-12-16 11:32:45.472728] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:19.572    11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.572   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3b593519-faa5-46bd-9b5d-af42ecb88e04
00:11:19.572   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3b593519-faa5-46bd-9b5d-af42ecb88e04 ']'
00:11:19.572   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:19.572   11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.572   11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.572  [2024-12-16 11:32:45.516245] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:19.572  [2024-12-16 11:32:45.516285] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:19.572  [2024-12-16 11:32:45.516387] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:19.572  [2024-12-16 11:32:45.516476] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:19.572  [2024-12-16 11:32:45.516495] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:11:19.572   11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.572    11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:19.572    11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.572    11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.572    11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:11:19.572    11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.572   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:11:19.572   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:11:19.572   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:19.572   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:11:19.572   11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.572   11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.572   11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.572   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:19.572   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:11:19.572   11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.572   11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.572   11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.572   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:19.572   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:11:19.572   11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.572   11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.572   11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.573   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:19.573   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:11:19.573   11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.573   11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.573   11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.573    11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:11:19.573    11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.573    11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.573    11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:11:19.832    11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:19.832    11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.832  [2024-12-16 11:32:45.683986] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:11:19.832  [2024-12-16 11:32:45.686180] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:11:19.832  [2024-12-16 11:32:45.686304] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:11:19.832  [2024-12-16 11:32:45.686346] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:11:19.832  [2024-12-16 11:32:45.686400] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:11:19.832  [2024-12-16 11:32:45.686465] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:11:19.832  [2024-12-16 11:32:45.686492] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:11:19.832  [2024-12-16 11:32:45.686512] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:11:19.832  [2024-12-16 11:32:45.686530] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:19.832  [2024-12-16 11:32:45.686563] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring
00:11:19.832  request:
00:11:19.832  {
00:11:19.832  "name": "raid_bdev1",
00:11:19.832  "raid_level": "raid0",
00:11:19.832  "base_bdevs": [
00:11:19.832  "malloc1",
00:11:19.832  "malloc2",
00:11:19.832  "malloc3",
00:11:19.832  "malloc4"
00:11:19.832  ],
00:11:19.832  "strip_size_kb": 64,
00:11:19.832  "superblock": false,
00:11:19.832  "method": "bdev_raid_create",
00:11:19.832  "req_id": 1
00:11:19.832  }
00:11:19.832  Got JSON-RPC error response
00:11:19.832  response:
00:11:19.832  {
00:11:19.832  "code": -17,
00:11:19.832  "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:11:19.832  }
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:11:19.832    11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:11:19.832    11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:19.832    11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.832    11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.832    11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.832  [2024-12-16 11:32:45.743829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:11:19.832  [2024-12-16 11:32:45.743941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:19.832  [2024-12-16 11:32:45.743996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:11:19.832  [2024-12-16 11:32:45.744030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:19.832  [2024-12-16 11:32:45.746548] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:19.832  [2024-12-16 11:32:45.746627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:11:19.832  [2024-12-16 11:32:45.746745] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:11:19.832  [2024-12-16 11:32:45.746845] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:11:19.832  pt1
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:19.832    11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:19.832    11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:19.832    11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.832    11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:19.832    11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:19.832    "name": "raid_bdev1",
00:11:19.832    "uuid": "3b593519-faa5-46bd-9b5d-af42ecb88e04",
00:11:19.832    "strip_size_kb": 64,
00:11:19.832    "state": "configuring",
00:11:19.832    "raid_level": "raid0",
00:11:19.832    "superblock": true,
00:11:19.832    "num_base_bdevs": 4,
00:11:19.832    "num_base_bdevs_discovered": 1,
00:11:19.832    "num_base_bdevs_operational": 4,
00:11:19.832    "base_bdevs_list": [
00:11:19.832      {
00:11:19.832        "name": "pt1",
00:11:19.832        "uuid": "00000000-0000-0000-0000-000000000001",
00:11:19.832        "is_configured": true,
00:11:19.832        "data_offset": 2048,
00:11:19.832        "data_size": 63488
00:11:19.832      },
00:11:19.832      {
00:11:19.832        "name": null,
00:11:19.832        "uuid": "00000000-0000-0000-0000-000000000002",
00:11:19.832        "is_configured": false,
00:11:19.832        "data_offset": 2048,
00:11:19.832        "data_size": 63488
00:11:19.832      },
00:11:19.832      {
00:11:19.832        "name": null,
00:11:19.832        "uuid": "00000000-0000-0000-0000-000000000003",
00:11:19.832        "is_configured": false,
00:11:19.832        "data_offset": 2048,
00:11:19.832        "data_size": 63488
00:11:19.832      },
00:11:19.832      {
00:11:19.832        "name": null,
00:11:19.832        "uuid": "00000000-0000-0000-0000-000000000004",
00:11:19.832        "is_configured": false,
00:11:19.832        "data_offset": 2048,
00:11:19.832        "data_size": 63488
00:11:19.832      }
00:11:19.832    ]
00:11:19.832  }'
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:19.832   11:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.399   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:11:20.399   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:20.399   11:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:20.399   11:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.399  [2024-12-16 11:32:46.227120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:20.399  [2024-12-16 11:32:46.227192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:20.399  [2024-12-16 11:32:46.227219] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:11:20.399  [2024-12-16 11:32:46.227237] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:20.399  [2024-12-16 11:32:46.227720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:20.400  [2024-12-16 11:32:46.227743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:20.400  [2024-12-16 11:32:46.227829] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:11:20.400  [2024-12-16 11:32:46.227854] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:20.400  pt2
00:11:20.400   11:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:20.400   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:11:20.400   11:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:20.400   11:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.400  [2024-12-16 11:32:46.239085] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:11:20.400   11:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:20.400   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4
00:11:20.400   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:20.400   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:20.400   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:20.400   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:20.400   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:20.400   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:20.400   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:20.400   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:20.400   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:20.400    11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:20.400    11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:20.400    11:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:20.400    11:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.400    11:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:20.400   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:20.400    "name": "raid_bdev1",
00:11:20.400    "uuid": "3b593519-faa5-46bd-9b5d-af42ecb88e04",
00:11:20.400    "strip_size_kb": 64,
00:11:20.400    "state": "configuring",
00:11:20.400    "raid_level": "raid0",
00:11:20.400    "superblock": true,
00:11:20.400    "num_base_bdevs": 4,
00:11:20.400    "num_base_bdevs_discovered": 1,
00:11:20.400    "num_base_bdevs_operational": 4,
00:11:20.400    "base_bdevs_list": [
00:11:20.400      {
00:11:20.400        "name": "pt1",
00:11:20.400        "uuid": "00000000-0000-0000-0000-000000000001",
00:11:20.400        "is_configured": true,
00:11:20.400        "data_offset": 2048,
00:11:20.400        "data_size": 63488
00:11:20.400      },
00:11:20.400      {
00:11:20.400        "name": null,
00:11:20.400        "uuid": "00000000-0000-0000-0000-000000000002",
00:11:20.400        "is_configured": false,
00:11:20.400        "data_offset": 0,
00:11:20.400        "data_size": 63488
00:11:20.400      },
00:11:20.400      {
00:11:20.400        "name": null,
00:11:20.400        "uuid": "00000000-0000-0000-0000-000000000003",
00:11:20.400        "is_configured": false,
00:11:20.400        "data_offset": 2048,
00:11:20.400        "data_size": 63488
00:11:20.400      },
00:11:20.400      {
00:11:20.400        "name": null,
00:11:20.400        "uuid": "00000000-0000-0000-0000-000000000004",
00:11:20.400        "is_configured": false,
00:11:20.400        "data_offset": 2048,
00:11:20.400        "data_size": 63488
00:11:20.400      }
00:11:20.400    ]
00:11:20.400  }'
00:11:20.400   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:20.400   11:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.659  [2024-12-16 11:32:46.634460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:20.659  [2024-12-16 11:32:46.634617] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:20.659  [2024-12-16 11:32:46.634672] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:11:20.659  [2024-12-16 11:32:46.634714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:20.659  [2024-12-16 11:32:46.635212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:20.659  [2024-12-16 11:32:46.635293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:20.659  [2024-12-16 11:32:46.635418] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:11:20.659  [2024-12-16 11:32:46.635479] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:20.659  pt2
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.659  [2024-12-16 11:32:46.646378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:11:20.659  [2024-12-16 11:32:46.646447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:20.659  [2024-12-16 11:32:46.646479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:11:20.659  [2024-12-16 11:32:46.646491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:20.659  [2024-12-16 11:32:46.646955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:20.659  [2024-12-16 11:32:46.646985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:11:20.659  [2024-12-16 11:32:46.647057] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:11:20.659  [2024-12-16 11:32:46.647081] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:11:20.659  pt3
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.659  [2024-12-16 11:32:46.658363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:11:20.659  [2024-12-16 11:32:46.658428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:20.659  [2024-12-16 11:32:46.658448] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:11:20.659  [2024-12-16 11:32:46.658459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:20.659  [2024-12-16 11:32:46.658863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:20.659  [2024-12-16 11:32:46.658890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:11:20.659  [2024-12-16 11:32:46.658972] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:11:20.659  [2024-12-16 11:32:46.658995] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:11:20.659  [2024-12-16 11:32:46.659111] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:11:20.659  [2024-12-16 11:32:46.659127] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:11:20.659  [2024-12-16 11:32:46.659402] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:11:20.659  [2024-12-16 11:32:46.659552] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:11:20.659  [2024-12-16 11:32:46.659565] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:11:20.659  [2024-12-16 11:32:46.659681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:20.659  pt4
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:20.659    11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:20.659    11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:20.659    11:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:20.659    11:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:20.659    11:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:20.659    "name": "raid_bdev1",
00:11:20.659    "uuid": "3b593519-faa5-46bd-9b5d-af42ecb88e04",
00:11:20.659    "strip_size_kb": 64,
00:11:20.659    "state": "online",
00:11:20.659    "raid_level": "raid0",
00:11:20.659    "superblock": true,
00:11:20.659    "num_base_bdevs": 4,
00:11:20.659    "num_base_bdevs_discovered": 4,
00:11:20.659    "num_base_bdevs_operational": 4,
00:11:20.659    "base_bdevs_list": [
00:11:20.659      {
00:11:20.659        "name": "pt1",
00:11:20.659        "uuid": "00000000-0000-0000-0000-000000000001",
00:11:20.659        "is_configured": true,
00:11:20.659        "data_offset": 2048,
00:11:20.659        "data_size": 63488
00:11:20.659      },
00:11:20.659      {
00:11:20.659        "name": "pt2",
00:11:20.659        "uuid": "00000000-0000-0000-0000-000000000002",
00:11:20.659        "is_configured": true,
00:11:20.659        "data_offset": 2048,
00:11:20.659        "data_size": 63488
00:11:20.659      },
00:11:20.659      {
00:11:20.659        "name": "pt3",
00:11:20.659        "uuid": "00000000-0000-0000-0000-000000000003",
00:11:20.659        "is_configured": true,
00:11:20.659        "data_offset": 2048,
00:11:20.659        "data_size": 63488
00:11:20.659      },
00:11:20.659      {
00:11:20.659        "name": "pt4",
00:11:20.659        "uuid": "00000000-0000-0000-0000-000000000004",
00:11:20.659        "is_configured": true,
00:11:20.659        "data_offset": 2048,
00:11:20.659        "data_size": 63488
00:11:20.659      }
00:11:20.659    ]
00:11:20.659  }'
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:20.659   11:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.228   11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:11:21.228   11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:11:21.228   11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:21.228   11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:21.228   11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:11:21.228   11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:21.228    11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:21.228    11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:21.228    11:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:21.228    11:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.228  [2024-12-16 11:32:47.153995] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:21.228    11:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:21.228   11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:21.228    "name": "raid_bdev1",
00:11:21.228    "aliases": [
00:11:21.228      "3b593519-faa5-46bd-9b5d-af42ecb88e04"
00:11:21.228    ],
00:11:21.228    "product_name": "Raid Volume",
00:11:21.228    "block_size": 512,
00:11:21.228    "num_blocks": 253952,
00:11:21.228    "uuid": "3b593519-faa5-46bd-9b5d-af42ecb88e04",
00:11:21.228    "assigned_rate_limits": {
00:11:21.228      "rw_ios_per_sec": 0,
00:11:21.228      "rw_mbytes_per_sec": 0,
00:11:21.228      "r_mbytes_per_sec": 0,
00:11:21.228      "w_mbytes_per_sec": 0
00:11:21.228    },
00:11:21.228    "claimed": false,
00:11:21.228    "zoned": false,
00:11:21.228    "supported_io_types": {
00:11:21.228      "read": true,
00:11:21.228      "write": true,
00:11:21.228      "unmap": true,
00:11:21.228      "flush": true,
00:11:21.228      "reset": true,
00:11:21.228      "nvme_admin": false,
00:11:21.228      "nvme_io": false,
00:11:21.228      "nvme_io_md": false,
00:11:21.228      "write_zeroes": true,
00:11:21.228      "zcopy": false,
00:11:21.228      "get_zone_info": false,
00:11:21.228      "zone_management": false,
00:11:21.228      "zone_append": false,
00:11:21.228      "compare": false,
00:11:21.228      "compare_and_write": false,
00:11:21.228      "abort": false,
00:11:21.228      "seek_hole": false,
00:11:21.228      "seek_data": false,
00:11:21.228      "copy": false,
00:11:21.228      "nvme_iov_md": false
00:11:21.228    },
00:11:21.228    "memory_domains": [
00:11:21.228      {
00:11:21.228        "dma_device_id": "system",
00:11:21.228        "dma_device_type": 1
00:11:21.228      },
00:11:21.228      {
00:11:21.228        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:21.228        "dma_device_type": 2
00:11:21.228      },
00:11:21.228      {
00:11:21.228        "dma_device_id": "system",
00:11:21.228        "dma_device_type": 1
00:11:21.228      },
00:11:21.228      {
00:11:21.228        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:21.228        "dma_device_type": 2
00:11:21.228      },
00:11:21.228      {
00:11:21.228        "dma_device_id": "system",
00:11:21.228        "dma_device_type": 1
00:11:21.228      },
00:11:21.228      {
00:11:21.228        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:21.228        "dma_device_type": 2
00:11:21.228      },
00:11:21.228      {
00:11:21.228        "dma_device_id": "system",
00:11:21.228        "dma_device_type": 1
00:11:21.228      },
00:11:21.228      {
00:11:21.228        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:21.228        "dma_device_type": 2
00:11:21.228      }
00:11:21.228    ],
00:11:21.228    "driver_specific": {
00:11:21.228      "raid": {
00:11:21.228        "uuid": "3b593519-faa5-46bd-9b5d-af42ecb88e04",
00:11:21.228        "strip_size_kb": 64,
00:11:21.228        "state": "online",
00:11:21.228        "raid_level": "raid0",
00:11:21.228        "superblock": true,
00:11:21.228        "num_base_bdevs": 4,
00:11:21.228        "num_base_bdevs_discovered": 4,
00:11:21.228        "num_base_bdevs_operational": 4,
00:11:21.228        "base_bdevs_list": [
00:11:21.228          {
00:11:21.228            "name": "pt1",
00:11:21.228            "uuid": "00000000-0000-0000-0000-000000000001",
00:11:21.228            "is_configured": true,
00:11:21.228            "data_offset": 2048,
00:11:21.228            "data_size": 63488
00:11:21.228          },
00:11:21.228          {
00:11:21.228            "name": "pt2",
00:11:21.228            "uuid": "00000000-0000-0000-0000-000000000002",
00:11:21.228            "is_configured": true,
00:11:21.228            "data_offset": 2048,
00:11:21.228            "data_size": 63488
00:11:21.228          },
00:11:21.228          {
00:11:21.228            "name": "pt3",
00:11:21.228            "uuid": "00000000-0000-0000-0000-000000000003",
00:11:21.228            "is_configured": true,
00:11:21.228            "data_offset": 2048,
00:11:21.228            "data_size": 63488
00:11:21.228          },
00:11:21.228          {
00:11:21.228            "name": "pt4",
00:11:21.228            "uuid": "00000000-0000-0000-0000-000000000004",
00:11:21.228            "is_configured": true,
00:11:21.228            "data_offset": 2048,
00:11:21.228            "data_size": 63488
00:11:21.228          }
00:11:21.228        ]
00:11:21.228      }
00:11:21.228    }
00:11:21.228  }'
00:11:21.228    11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:21.228   11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:11:21.228  pt2
00:11:21.228  pt3
00:11:21.228  pt4'
00:11:21.228    11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:21.487   11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:11:21.487   11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:21.487    11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:11:21.487    11:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:21.487    11:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.487    11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:21.487    11:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:21.487   11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:21.487   11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:21.487   11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:21.487    11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:11:21.487    11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:21.487    11:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:21.487    11:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.487    11:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:21.487   11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:21.487   11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:21.487   11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:21.487    11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:11:21.487    11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:21.487    11:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:21.487    11:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.487    11:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:21.487   11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:21.487   11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:21.487   11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:21.487    11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:21.487    11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:11:21.487    11:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:21.487    11:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.487    11:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:21.488   11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:21.488   11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:21.488    11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:21.488    11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:11:21.488    11:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:21.488    11:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.488  [2024-12-16 11:32:47.533319] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:21.745    11:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:21.745   11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3b593519-faa5-46bd-9b5d-af42ecb88e04 '!=' 3b593519-faa5-46bd-9b5d-af42ecb88e04 ']'
00:11:21.745   11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0
00:11:21.745   11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:11:21.745   11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:11:21.745   11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81980
00:11:21.745   11:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 81980 ']'
00:11:21.745   11:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 81980
00:11:21.745    11:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname
00:11:21.745   11:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:21.745    11:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81980
00:11:21.745   11:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:11:21.745   11:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:11:21.745   11:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81980'
00:11:21.745  killing process with pid 81980
00:11:21.745   11:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 81980
00:11:21.746  [2024-12-16 11:32:47.615757] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:11:21.746   11:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 81980
00:11:21.746  [2024-12-16 11:32:47.615948] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:21.746  [2024-12-16 11:32:47.616068] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:21.746  [2024-12-16 11:32:47.616124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:11:21.746  [2024-12-16 11:32:47.663809] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:11:22.004   11:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:11:22.004  
00:11:22.004  real	0m4.467s
00:11:22.004  user	0m7.079s
00:11:22.004  sys	0m0.996s
00:11:22.004   11:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:22.004   11:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.004  ************************************
00:11:22.004  END TEST raid_superblock_test
00:11:22.004  ************************************
00:11:22.004   11:32:47 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read
00:11:22.004   11:32:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:11:22.004   11:32:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:22.004   11:32:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:11:22.004  ************************************
00:11:22.004  START TEST raid_read_error_test
00:11:22.004  ************************************
00:11:22.004   11:32:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read
00:11:22.004   11:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0
00:11:22.004   11:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4
00:11:22.004   11:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:11:22.004    11:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:11:22.004    11:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:22.004    11:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:11:22.004    11:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:22.004    11:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:22.004    11:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:11:22.004    11:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:22.004    11:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:22.004    11:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:11:22.004    11:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:22.004    11:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:22.004    11:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4
00:11:22.004    11:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:22.004    11:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:22.005   11:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:11:22.005   11:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:11:22.005   11:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:11:22.005   11:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:11:22.005   11:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:11:22.005   11:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:11:22.005   11:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:11:22.005   11:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']'
00:11:22.005   11:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:11:22.005   11:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:11:22.005    11:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:11:22.005   11:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CL17CtDIw2
00:11:22.005   11:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=82228
00:11:22.005   11:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:11:22.005   11:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 82228
00:11:22.005   11:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 82228 ']'
00:11:22.005   11:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:22.005   11:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:11:22.005   11:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:22.005  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:22.005   11:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:11:22.005   11:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.263  [2024-12-16 11:32:48.110477] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:11:22.263  [2024-12-16 11:32:48.110628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82228 ]
00:11:22.263  [2024-12-16 11:32:48.276371] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:22.263  [2024-12-16 11:32:48.327835] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:11:22.522  [2024-12-16 11:32:48.371449] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:22.522  [2024-12-16 11:32:48.371488] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:23.090   11:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:11:23.090   11:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0
00:11:23.090   11:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:23.090   11:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:11:23.090   11:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:23.090   11:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.090  BaseBdev1_malloc
00:11:23.090   11:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:23.090   11:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:11:23.090   11:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.090  true
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.090  [2024-12-16 11:32:49.018407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:11:23.090  [2024-12-16 11:32:49.018469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:23.090  [2024-12-16 11:32:49.018502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:11:23.090  [2024-12-16 11:32:49.018517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:23.090  [2024-12-16 11:32:49.020758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:23.090  [2024-12-16 11:32:49.020798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:11:23.090  BaseBdev1
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
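Each base device is built as a three-layer stack before the raid is assembled: a 32 MiB malloc bdev with 512-byte blocks, an error-injection bdev created on top of it (which shows up as EE_BaseBdev1_malloc), and a passthru bdev named BaseBdev1 over that. The raid only ever consumes the passthru, while the EE_ layer is the hook used later by bdev_error_inject_error. The same sequence written as plain rpc.py calls instead of the suite's rpc_cmd wrapper (an equivalent sketch, not taken verbatim from the script):

    scripts/rpc.py bdev_malloc_create 32 512 -b BaseBdev1_malloc        # 32 MiB, 512 B blocks
    scripts/rpc.py bdev_error_create BaseBdev1_malloc                   # exposes EE_BaseBdev1_malloc
    scripts/rpc.py bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1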
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.090  BaseBdev2_malloc
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.090  true
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.090  [2024-12-16 11:32:49.069151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:11:23.090  [2024-12-16 11:32:49.069248] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:23.090  [2024-12-16 11:32:49.069276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:11:23.090  [2024-12-16 11:32:49.069287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:23.090  [2024-12-16 11:32:49.071369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:23.090  [2024-12-16 11:32:49.071409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:11:23.090  BaseBdev2
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.090  BaseBdev3_malloc
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.090  true
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.090  [2024-12-16 11:32:49.109942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:11:23.090  [2024-12-16 11:32:49.109996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:23.090  [2024-12-16 11:32:49.110021] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:11:23.090  [2024-12-16 11:32:49.110032] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:23.090  [2024-12-16 11:32:49.112127] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:23.090  [2024-12-16 11:32:49.112166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:11:23.090  BaseBdev3
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.090  BaseBdev4_malloc
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.090  true
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:23.090   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.090  [2024-12-16 11:32:49.150423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc
00:11:23.090  [2024-12-16 11:32:49.150475] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:23.090  [2024-12-16 11:32:49.150504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:11:23.090  [2024-12-16 11:32:49.150515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:23.091  [2024-12-16 11:32:49.152631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:23.091  [2024-12-16 11:32:49.152726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:11:23.350  BaseBdev4
00:11:23.350   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:23.350   11:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s
00:11:23.350   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:23.350   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.350  [2024-12-16 11:32:49.162459] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:23.350  [2024-12-16 11:32:49.164409] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:23.350  [2024-12-16 11:32:49.164573] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:23.350  [2024-12-16 11:32:49.164644] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:11:23.350  [2024-12-16 11:32:49.164868] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080
00:11:23.350  [2024-12-16 11:32:49.164882] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:11:23.350  [2024-12-16 11:32:49.165160] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:11:23.350  [2024-12-16 11:32:49.165322] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080
00:11:23.350  [2024-12-16 11:32:49.165334] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080
00:11:23.350  [2024-12-16 11:32:49.165479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:23.350   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
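The geometry logged above is consistent with the superblock layout reported in the JSON below: a 32 MiB malloc bdev holds 65536 blocks of 512 bytes, the -s superblock reserves a 2048-block data_offset on each member, and raid0 across the four remaining 63488-block regions yields the advertised blockcnt:

    echo $(( 32 * 1024 * 1024 / 512 ))   # 65536 blocks per malloc bdev
    echo $(( 65536 - 2048 ))             # 63488 usable blocks per member (data_size)
    echo $(( 4 * 63488 ))                # 253952 -> the "blockcnt 253952" above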
00:11:23.350   11:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:11:23.350   11:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:23.350   11:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:23.350   11:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:23.350   11:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:23.350   11:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:23.350   11:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:23.350   11:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:23.350   11:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:23.350   11:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:23.350    11:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:23.350    11:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:23.350    11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:23.350    11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.350    11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:23.350   11:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:23.350    "name": "raid_bdev1",
00:11:23.350    "uuid": "a2ff715d-3f0a-4f23-a20a-bdf0e3b243ea",
00:11:23.350    "strip_size_kb": 64,
00:11:23.350    "state": "online",
00:11:23.350    "raid_level": "raid0",
00:11:23.350    "superblock": true,
00:11:23.350    "num_base_bdevs": 4,
00:11:23.350    "num_base_bdevs_discovered": 4,
00:11:23.350    "num_base_bdevs_operational": 4,
00:11:23.350    "base_bdevs_list": [
00:11:23.350      {
00:11:23.350        "name": "BaseBdev1",
00:11:23.350        "uuid": "4997b4b3-6e7a-5f4e-b589-fcd7f50449f6",
00:11:23.350        "is_configured": true,
00:11:23.350        "data_offset": 2048,
00:11:23.350        "data_size": 63488
00:11:23.350      },
00:11:23.350      {
00:11:23.350        "name": "BaseBdev2",
00:11:23.350        "uuid": "1ffbb3d1-4d50-58ac-bac8-2d56523b7ebc",
00:11:23.350        "is_configured": true,
00:11:23.350        "data_offset": 2048,
00:11:23.350        "data_size": 63488
00:11:23.350      },
00:11:23.350      {
00:11:23.350        "name": "BaseBdev3",
00:11:23.350        "uuid": "3788ddfc-7234-56af-8c57-80273db8eb7c",
00:11:23.350        "is_configured": true,
00:11:23.350        "data_offset": 2048,
00:11:23.350        "data_size": 63488
00:11:23.350      },
00:11:23.350      {
00:11:23.350        "name": "BaseBdev4",
00:11:23.350        "uuid": "402a93ce-544b-5451-a2cf-83bbaea97d67",
00:11:23.350        "is_configured": true,
00:11:23.350        "data_offset": 2048,
00:11:23.350        "data_size": 63488
00:11:23.350      }
00:11:23.350    ]
00:11:23.350  }'
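verify_raid_bdev_state captures the raid bdev's JSON with the jq select filter shown above and checks it against the expected online/raid0/64/4 arguments. The exact comparisons live in bdev_raid.sh; the following is only a sketch of the kind of checks it performs on $raid_bdev_info (field names are taken from the JSON above, the check bodies are an assumption):

    [[ $(jq -r '.state'      <<< "$raid_bdev_info") == online ]]
    [[ $(jq -r '.raid_level' <<< "$raid_bdev_info") == raid0 ]]
    (( $(jq -r '.strip_size_kb' <<< "$raid_bdev_info") == 64 ))
    (( $(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info") == 4 ))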
00:11:23.350   11:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:23.350   11:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.610   11:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:11:23.610   11:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:11:23.870  [2024-12-16 11:32:49.737896] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:11:24.804   11:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:11:24.804   11:32:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:24.804   11:32:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:24.804   11:32:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
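bdev_error_inject_error arms the EE_BaseBdev1_malloc error bdev to fail read I/O. Because raid0 stripes data with no redundancy, that failure cannot be masked: it surfaces as a failed raid_bdev1 I/O in bdevperf (the io_failed of 1 in the results below), while the array itself stays online with all four members, which is exactly what the following verify step asserts. The direct rpc.py form of the call (the trace shows only the positional arguments; a failure-count option may also exist but is not used here):

    scripts/rpc.py bdev_error_inject_error EE_BaseBdev1_malloc read failure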
00:11:24.805   11:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:11:24.805   11:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:11:24.805   11:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4
00:11:24.805   11:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:11:24.805   11:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:24.805   11:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:24.805   11:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:24.805   11:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:24.805   11:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:24.805   11:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:24.805   11:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:24.805   11:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:24.805   11:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:24.805    11:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:24.805    11:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:24.805    11:32:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:24.805    11:32:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:24.805    11:32:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:24.805   11:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:24.805    "name": "raid_bdev1",
00:11:24.805    "uuid": "a2ff715d-3f0a-4f23-a20a-bdf0e3b243ea",
00:11:24.805    "strip_size_kb": 64,
00:11:24.805    "state": "online",
00:11:24.805    "raid_level": "raid0",
00:11:24.805    "superblock": true,
00:11:24.805    "num_base_bdevs": 4,
00:11:24.805    "num_base_bdevs_discovered": 4,
00:11:24.805    "num_base_bdevs_operational": 4,
00:11:24.805    "base_bdevs_list": [
00:11:24.805      {
00:11:24.805        "name": "BaseBdev1",
00:11:24.805        "uuid": "4997b4b3-6e7a-5f4e-b589-fcd7f50449f6",
00:11:24.805        "is_configured": true,
00:11:24.805        "data_offset": 2048,
00:11:24.805        "data_size": 63488
00:11:24.805      },
00:11:24.805      {
00:11:24.805        "name": "BaseBdev2",
00:11:24.805        "uuid": "1ffbb3d1-4d50-58ac-bac8-2d56523b7ebc",
00:11:24.805        "is_configured": true,
00:11:24.805        "data_offset": 2048,
00:11:24.805        "data_size": 63488
00:11:24.805      },
00:11:24.805      {
00:11:24.805        "name": "BaseBdev3",
00:11:24.805        "uuid": "3788ddfc-7234-56af-8c57-80273db8eb7c",
00:11:24.805        "is_configured": true,
00:11:24.805        "data_offset": 2048,
00:11:24.805        "data_size": 63488
00:11:24.805      },
00:11:24.805      {
00:11:24.805        "name": "BaseBdev4",
00:11:24.805        "uuid": "402a93ce-544b-5451-a2cf-83bbaea97d67",
00:11:24.805        "is_configured": true,
00:11:24.805        "data_offset": 2048,
00:11:24.805        "data_size": 63488
00:11:24.805      }
00:11:24.805    ]
00:11:24.805  }'
00:11:24.805   11:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:24.805   11:32:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.064   11:32:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:25.064   11:32:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:25.064   11:32:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.323  [2024-12-16 11:32:51.131768] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:25.323  [2024-12-16 11:32:51.131886] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:25.323  [2024-12-16 11:32:51.135118] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:25.323  [2024-12-16 11:32:51.135264] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:25.323  [2024-12-16 11:32:51.135343] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:25.323  [2024-12-16 11:32:51.135414] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline
00:11:25.323  {
00:11:25.323    "results": [
00:11:25.323      {
00:11:25.323        "job": "raid_bdev1",
00:11:25.323        "core_mask": "0x1",
00:11:25.323        "workload": "randrw",
00:11:25.323        "percentage": 50,
00:11:25.323        "status": "finished",
00:11:25.323        "queue_depth": 1,
00:11:25.323        "io_size": 131072,
00:11:25.323        "runtime": 1.39474,
00:11:25.323        "iops": 13750.233018340336,
00:11:25.323        "mibps": 1718.779127292542,
00:11:25.323        "io_failed": 1,
00:11:25.323        "io_timeout": 0,
00:11:25.323        "avg_latency_us": 100.61584188127891,
00:11:25.323        "min_latency_us": 26.941484716157206,
00:11:25.323        "max_latency_us": 1760.0279475982534
00:11:25.323      }
00:11:25.323    ],
00:11:25.323    "core_count": 1
00:11:25.323  }
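The bdevperf summary above is internally consistent: at the 131072-byte I/O size, 13750.23 IOPS is 13750.23 / 8 MiB/s, which reproduces the mibps field, and the single injected read error appears as io_failed over the ~1.39 s run. The cross-check is pure arithmetic on the values above:

    awk 'BEGIN { printf "%.6f\n", 13750.233018340336 * 131072 / (1024 * 1024) }'
    # -> 1718.779127, i.e. the "mibps" value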
00:11:25.323   11:32:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:25.323   11:32:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 82228
00:11:25.323   11:32:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 82228 ']'
00:11:25.323   11:32:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 82228
00:11:25.323    11:32:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname
00:11:25.323   11:32:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:25.323    11:32:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82228
00:11:25.323   11:32:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:11:25.323  killing process with pid 82228
00:11:25.323   11:32:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:11:25.323   11:32:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82228'
00:11:25.323   11:32:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 82228
00:11:25.323  [2024-12-16 11:32:51.173290] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:11:25.323   11:32:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 82228
00:11:25.323  [2024-12-16 11:32:51.211851] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:11:25.582    11:32:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CL17CtDIw2
00:11:25.582    11:32:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:11:25.582    11:32:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:11:25.582  ************************************
00:11:25.582  END TEST raid_read_error_test
00:11:25.582  ************************************
00:11:25.582   11:32:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72
00:11:25.582   11:32:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0
00:11:25.582   11:32:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:11:25.582   11:32:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:11:25.582   11:32:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]]
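fail_per_s=0.72 is pulled from the per-bdev line of the bdevperf log by the grep/awk pipeline above (column 6 is evidently the failures-per-second figure), and it matches what the JSON already implies, one failed I/O over the 1.39474 s run:

    awk 'BEGIN { printf "%.2f\n", 1 / 1.39474 }'   # -> 0.72

Because has_redundancy returns 1 for raid0, the test asserts that this rate is non-zero, i.e. that the injected read error really did propagate through the array; presumably the redundant-level branch expects 0.00 here instead.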
00:11:25.582  
00:11:25.582  real	0m3.475s
00:11:25.582  user	0m4.421s
00:11:25.582  sys	0m0.547s
00:11:25.582   11:32:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:25.582   11:32:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.582   11:32:51 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write
00:11:25.582   11:32:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:11:25.582   11:32:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:25.582   11:32:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:11:25.582  ************************************
00:11:25.582  START TEST raid_write_error_test
00:11:25.582  ************************************
00:11:25.582   11:32:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write
00:11:25.582   11:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0
00:11:25.582   11:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4
00:11:25.582   11:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:11:25.582    11:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:11:25.582    11:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:25.582    11:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:11:25.582    11:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:25.582    11:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:25.582    11:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:11:25.582    11:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:25.582    11:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:25.582    11:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:11:25.582    11:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:25.582    11:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:25.582    11:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4
00:11:25.582    11:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:25.582    11:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:25.582   11:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:11:25.582   11:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:11:25.582   11:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:11:25.582   11:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:11:25.582   11:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:11:25.582   11:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:11:25.582   11:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:11:25.582   11:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']'
00:11:25.583   11:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:11:25.583   11:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:11:25.583    11:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:11:25.583   11:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.IQZT79KReC
00:11:25.583   11:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=82363
00:11:25.583  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:25.583   11:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 82363
00:11:25.583   11:32:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 82363 ']'
00:11:25.583   11:32:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:25.583   11:32:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:11:25.583   11:32:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:25.583   11:32:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:11:25.583   11:32:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.583   11:32:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:11:25.583  [2024-12-16 11:32:51.638635] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:11:25.583  [2024-12-16 11:32:51.638788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82363 ]
00:11:25.841  [2024-12-16 11:32:51.804865] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:25.841  [2024-12-16 11:32:51.859746] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:11:25.841  [2024-12-16 11:32:51.906111] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:25.841  [2024-12-16 11:32:51.906151] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.779  BaseBdev1_malloc
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.779  true
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.779  [2024-12-16 11:32:52.606495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:11:26.779  [2024-12-16 11:32:52.606583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:26.779  [2024-12-16 11:32:52.606615] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:11:26.779  [2024-12-16 11:32:52.606629] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:26.779  [2024-12-16 11:32:52.609223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:26.779  [2024-12-16 11:32:52.609269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:11:26.779  BaseBdev1
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.779  BaseBdev2_malloc
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.779  true
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.779  [2024-12-16 11:32:52.655296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:11:26.779  [2024-12-16 11:32:52.655420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:26.779  [2024-12-16 11:32:52.655455] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:11:26.779  [2024-12-16 11:32:52.655469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:26.779  [2024-12-16 11:32:52.658060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:26.779  [2024-12-16 11:32:52.658103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:11:26.779  BaseBdev2
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.779  BaseBdev3_malloc
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.779  true
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:26.779   11:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.780  [2024-12-16 11:32:52.696770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:11:26.780  [2024-12-16 11:32:52.696928] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:26.780  [2024-12-16 11:32:52.696983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:11:26.780  [2024-12-16 11:32:52.696998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:26.780  [2024-12-16 11:32:52.699511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:26.780  [2024-12-16 11:32:52.699564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:11:26.780  BaseBdev3
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.780  BaseBdev4_malloc
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.780  true
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.780  [2024-12-16 11:32:52.738062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc
00:11:26.780  [2024-12-16 11:32:52.738122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:26.780  [2024-12-16 11:32:52.738157] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:11:26.780  [2024-12-16 11:32:52.738171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:26.780  [2024-12-16 11:32:52.740627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:26.780  [2024-12-16 11:32:52.740671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:11:26.780  BaseBdev4
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.780  [2024-12-16 11:32:52.750104] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:26.780  [2024-12-16 11:32:52.752256] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:26.780  [2024-12-16 11:32:52.752429] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:26.780  [2024-12-16 11:32:52.752503] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:11:26.780  [2024-12-16 11:32:52.752744] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080
00:11:26.780  [2024-12-16 11:32:52.752759] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:11:26.780  [2024-12-16 11:32:52.753051] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:11:26.780  [2024-12-16 11:32:52.753212] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080
00:11:26.780  [2024-12-16 11:32:52.753226] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080
00:11:26.780  [2024-12-16 11:32:52.753376] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:26.780    11:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:26.780    11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:26.780    11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.780    11:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:26.780    11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:26.780    "name": "raid_bdev1",
00:11:26.780    "uuid": "2ee43c5b-5069-4148-882b-1e4e378b254c",
00:11:26.780    "strip_size_kb": 64,
00:11:26.780    "state": "online",
00:11:26.780    "raid_level": "raid0",
00:11:26.780    "superblock": true,
00:11:26.780    "num_base_bdevs": 4,
00:11:26.780    "num_base_bdevs_discovered": 4,
00:11:26.780    "num_base_bdevs_operational": 4,
00:11:26.780    "base_bdevs_list": [
00:11:26.780      {
00:11:26.780        "name": "BaseBdev1",
00:11:26.780        "uuid": "a79fb8b7-909a-50a1-865d-32e2c2dc345c",
00:11:26.780        "is_configured": true,
00:11:26.780        "data_offset": 2048,
00:11:26.780        "data_size": 63488
00:11:26.780      },
00:11:26.780      {
00:11:26.780        "name": "BaseBdev2",
00:11:26.780        "uuid": "c3fbb43f-8978-583a-8574-df7c5ac13b1d",
00:11:26.780        "is_configured": true,
00:11:26.780        "data_offset": 2048,
00:11:26.780        "data_size": 63488
00:11:26.780      },
00:11:26.780      {
00:11:26.780        "name": "BaseBdev3",
00:11:26.780        "uuid": "cf4d684c-f185-5c7b-801b-cf30d6dee74c",
00:11:26.780        "is_configured": true,
00:11:26.780        "data_offset": 2048,
00:11:26.780        "data_size": 63488
00:11:26.780      },
00:11:26.780      {
00:11:26.780        "name": "BaseBdev4",
00:11:26.780        "uuid": "061d4d49-05c3-5fbb-9888-e98533a4f896",
00:11:26.780        "is_configured": true,
00:11:26.780        "data_offset": 2048,
00:11:26.780        "data_size": 63488
00:11:26.780      }
00:11:26.780    ]
00:11:26.780  }'
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:26.780   11:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.347   11:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:11:27.347   11:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:11:27.347  [2024-12-16 11:32:53.329512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:11:28.284   11:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:11:28.284   11:32:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:28.284   11:32:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.284   11:32:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:28.284   11:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:11:28.284   11:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:11:28.284   11:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4
00:11:28.284   11:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:11:28.284   11:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:28.284   11:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:28.284   11:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:28.284   11:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:28.284   11:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:28.284   11:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:28.284   11:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:28.284   11:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:28.284   11:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:28.284    11:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:28.284    11:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:28.284    11:32:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:28.284    11:32:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.284    11:32:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:28.284   11:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:28.284    "name": "raid_bdev1",
00:11:28.284    "uuid": "2ee43c5b-5069-4148-882b-1e4e378b254c",
00:11:28.284    "strip_size_kb": 64,
00:11:28.284    "state": "online",
00:11:28.284    "raid_level": "raid0",
00:11:28.284    "superblock": true,
00:11:28.284    "num_base_bdevs": 4,
00:11:28.284    "num_base_bdevs_discovered": 4,
00:11:28.284    "num_base_bdevs_operational": 4,
00:11:28.284    "base_bdevs_list": [
00:11:28.284      {
00:11:28.284        "name": "BaseBdev1",
00:11:28.284        "uuid": "a79fb8b7-909a-50a1-865d-32e2c2dc345c",
00:11:28.284        "is_configured": true,
00:11:28.284        "data_offset": 2048,
00:11:28.284        "data_size": 63488
00:11:28.284      },
00:11:28.284      {
00:11:28.284        "name": "BaseBdev2",
00:11:28.284        "uuid": "c3fbb43f-8978-583a-8574-df7c5ac13b1d",
00:11:28.284        "is_configured": true,
00:11:28.284        "data_offset": 2048,
00:11:28.284        "data_size": 63488
00:11:28.284      },
00:11:28.284      {
00:11:28.284        "name": "BaseBdev3",
00:11:28.284        "uuid": "cf4d684c-f185-5c7b-801b-cf30d6dee74c",
00:11:28.284        "is_configured": true,
00:11:28.284        "data_offset": 2048,
00:11:28.284        "data_size": 63488
00:11:28.284      },
00:11:28.284      {
00:11:28.284        "name": "BaseBdev4",
00:11:28.284        "uuid": "061d4d49-05c3-5fbb-9888-e98533a4f896",
00:11:28.284        "is_configured": true,
00:11:28.284        "data_offset": 2048,
00:11:28.284        "data_size": 63488
00:11:28.284      }
00:11:28.284    ]
00:11:28.284  }'
00:11:28.284   11:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:28.284   11:32:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.853   11:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:28.853   11:32:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:28.853   11:32:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.853  [2024-12-16 11:32:54.702601] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:28.853  [2024-12-16 11:32:54.702639] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:28.853  [2024-12-16 11:32:54.705682] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:28.853  [2024-12-16 11:32:54.705744] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:28.853  [2024-12-16 11:32:54.705797] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:28.853  [2024-12-16 11:32:54.705808] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline
00:11:28.853  {
00:11:28.853    "results": [
00:11:28.853      {
00:11:28.853        "job": "raid_bdev1",
00:11:28.853        "core_mask": "0x1",
00:11:28.853        "workload": "randrw",
00:11:28.853        "percentage": 50,
00:11:28.853        "status": "finished",
00:11:28.853        "queue_depth": 1,
00:11:28.853        "io_size": 131072,
00:11:28.853        "runtime": 1.373413,
00:11:28.853        "iops": 13565.475206656702,
00:11:28.853        "mibps": 1695.6844008320877,
00:11:28.853        "io_failed": 1,
00:11:28.853        "io_timeout": 0,
00:11:28.853        "avg_latency_us": 101.96102231030429,
00:11:28.853        "min_latency_us": 27.94759825327511,
00:11:28.853        "max_latency_us": 1738.564192139738
00:11:28.853      }
00:11:28.853    ],
00:11:28.853    "core_count": 1
00:11:28.854  }
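The write-path run tells the same story as the read test: one injected write failure over the 1.373413 s run works out to the 0.73 failures/s that the grep/awk step extracts a few lines below:

    awk 'BEGIN { printf "%.2f\n", 1 / 1.373413 }'   # -> 0.73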
00:11:28.854   11:32:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:28.854   11:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 82363
00:11:28.854   11:32:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 82363 ']'
00:11:28.854   11:32:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 82363
00:11:28.854    11:32:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname
00:11:28.854   11:32:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:28.854    11:32:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82363
00:11:28.854  killing process with pid 82363
00:11:28.854   11:32:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:11:28.854   11:32:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:11:28.854   11:32:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82363'
00:11:28.854   11:32:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 82363
00:11:28.854  [2024-12-16 11:32:54.749514] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:11:28.854   11:32:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 82363
00:11:28.854  [2024-12-16 11:32:54.787820] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:11:29.114    11:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.IQZT79KReC
00:11:29.114    11:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:11:29.114    11:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:11:29.114   11:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73
00:11:29.114   11:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0
00:11:29.114   11:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:11:29.114   11:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:11:29.114   11:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]]
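The fail_per_s value extracted here lines up with the JSON summary: one failed I/O over a 1.373413 s runtime is roughly 0.73 failures per second, and because raid0 has no redundancy (has_redundancy returned 1) the assertion only requires that this rate differ from 0.00. A tiny arithmetic sketch of the same number (illustrative, not part of the script):
  awk 'BEGIN { printf "%.2f\n", 1 / 1.373413 }'   # -> 0.73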
00:11:29.114  
00:11:29.114  real	0m3.514s
00:11:29.114  user	0m4.511s
00:11:29.114  sys	0m0.574s
00:11:29.114  ************************************
00:11:29.114  END TEST raid_write_error_test
00:11:29.114  ************************************
00:11:29.114   11:32:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:29.114   11:32:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.114   11:32:55 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:11:29.114   11:32:55 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false
00:11:29.114   11:32:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:11:29.114   11:32:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:29.114   11:32:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:11:29.114  ************************************
00:11:29.114  START TEST raid_state_function_test
00:11:29.114  ************************************
00:11:29.115   11:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false
00:11:29.115   11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:11:29.115   11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:11:29.115   11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:11:29.115   11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:11:29.115    11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:11:29.115    11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:29.115    11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:11:29.115    11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:29.115    11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:29.115    11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:11:29.115    11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:29.115    11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:29.115    11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:11:29.115    11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:29.115    11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:29.115    11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:11:29.115    11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:29.115    11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:29.115   11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:11:29.115   11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:11:29.115   11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:11:29.115   11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:11:29.115   11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:11:29.115   11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:11:29.115   11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:11:29.115   11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:11:29.115   11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:11:29.115  Process raid pid: 82495
00:11:29.115   11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:11:29.115   11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:11:29.115   11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82495
00:11:29.115   11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:11:29.115   11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82495'
00:11:29.115   11:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82495
00:11:29.115  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:29.115   11:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 82495 ']'
00:11:29.115   11:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:29.115   11:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:11:29.115   11:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:29.115   11:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:11:29.115   11:32:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.374  [2024-12-16 11:32:55.214565] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:11:29.374  [2024-12-16 11:32:55.214706] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:29.374  [2024-12-16 11:32:55.376396] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:29.374  [2024-12-16 11:32:55.425386] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:11:29.632  [2024-12-16 11:32:55.468995] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:29.632  [2024-12-16 11:32:55.469032] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
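The raid_state_function_test steps that follow drive the standalone bdev_svc app started above (the -i 0 -L bdev_raid invocation) over its RPC socket. A rough manual equivalent, assuming scripts/rpc.py from the same SPDK checkout and the default /var/tmp/spdk.sock (both assumptions, not shown in the log):
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid &
  # wait until the app answers RPCs before issuing any bdev_* commands
  ./scripts/rpc.py rpc_get_methods > /dev/null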
00:11:30.200   11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:11:30.200   11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:11:30.200   11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:30.200   11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:30.200   11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.200  [2024-12-16 11:32:56.051249] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:30.200  [2024-12-16 11:32:56.051310] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:30.200  [2024-12-16 11:32:56.051330] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:30.200  [2024-12-16 11:32:56.051342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:30.200  [2024-12-16 11:32:56.051349] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:30.200  [2024-12-16 11:32:56.051361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:30.200  [2024-12-16 11:32:56.051367] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:11:30.200  [2024-12-16 11:32:56.051376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:11:30.200   11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:30.200   11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:30.200   11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:30.200   11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:30.200   11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:30.200   11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:30.200   11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:30.200   11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:30.200   11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:30.200   11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:30.200   11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:30.200    11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:30.200    11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:30.200    11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.200    11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:30.200    11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:30.200   11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:30.200    "name": "Existed_Raid",
00:11:30.200    "uuid": "00000000-0000-0000-0000-000000000000",
00:11:30.200    "strip_size_kb": 64,
00:11:30.200    "state": "configuring",
00:11:30.200    "raid_level": "concat",
00:11:30.200    "superblock": false,
00:11:30.200    "num_base_bdevs": 4,
00:11:30.200    "num_base_bdevs_discovered": 0,
00:11:30.200    "num_base_bdevs_operational": 4,
00:11:30.200    "base_bdevs_list": [
00:11:30.200      {
00:11:30.200        "name": "BaseBdev1",
00:11:30.200        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:30.200        "is_configured": false,
00:11:30.200        "data_offset": 0,
00:11:30.200        "data_size": 0
00:11:30.200      },
00:11:30.200      {
00:11:30.200        "name": "BaseBdev2",
00:11:30.200        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:30.200        "is_configured": false,
00:11:30.200        "data_offset": 0,
00:11:30.200        "data_size": 0
00:11:30.200      },
00:11:30.200      {
00:11:30.200        "name": "BaseBdev3",
00:11:30.200        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:30.200        "is_configured": false,
00:11:30.200        "data_offset": 0,
00:11:30.200        "data_size": 0
00:11:30.200      },
00:11:30.200      {
00:11:30.200        "name": "BaseBdev4",
00:11:30.200        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:30.200        "is_configured": false,
00:11:30.200        "data_offset": 0,
00:11:30.200        "data_size": 0
00:11:30.200      }
00:11:30.200    ]
00:11:30.200  }'
00:11:30.200   11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:30.200   11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
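The verify_raid_bdev_state helper above pulls the "Existed_Raid" entry out of bdev_raid_get_bdevs and checks its state and discovered/operational counts. A hedged standalone equivalent of that query (the rpc.py path and socket are assumptions; the jq filter mirrors the one in the trace):
  ./scripts/rpc.py bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
  # expected at this point: "configuring 0/4", since none of the four base bdevs exist yet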
00:11:30.459   11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:11:30.459   11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:30.459   11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.718  [2024-12-16 11:32:56.526356] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:30.718  [2024-12-16 11:32:56.526500] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:11:30.718   11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:30.718   11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:30.718   11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:30.718   11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.718  [2024-12-16 11:32:56.538413] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:30.718  [2024-12-16 11:32:56.538527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:30.718  [2024-12-16 11:32:56.538570] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:30.718  [2024-12-16 11:32:56.538596] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:30.718  [2024-12-16 11:32:56.538614] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:30.718  [2024-12-16 11:32:56.538637] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:30.718  [2024-12-16 11:32:56.538655] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:11:30.718  [2024-12-16 11:32:56.538676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:11:30.718   11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:30.718   11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:11:30.718   11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:30.718   11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.718  [2024-12-16 11:32:56.559549] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:30.718  BaseBdev1
00:11:30.718   11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:30.718   11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:11:30.718   11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:11:30.718   11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:30.718   11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:11:30.718   11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:30.718   11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:30.718   11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:30.718   11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:30.718   11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.718   11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:30.718   11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:11:30.718   11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:30.718   11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.718  [
00:11:30.718  {
00:11:30.718  "name": "BaseBdev1",
00:11:30.718  "aliases": [
00:11:30.718  "739b9322-0cf8-49aa-92a3-485d9c4663fa"
00:11:30.718  ],
00:11:30.718  "product_name": "Malloc disk",
00:11:30.718  "block_size": 512,
00:11:30.718  "num_blocks": 65536,
00:11:30.718  "uuid": "739b9322-0cf8-49aa-92a3-485d9c4663fa",
00:11:30.718  "assigned_rate_limits": {
00:11:30.718  "rw_ios_per_sec": 0,
00:11:30.718  "rw_mbytes_per_sec": 0,
00:11:30.718  "r_mbytes_per_sec": 0,
00:11:30.718  "w_mbytes_per_sec": 0
00:11:30.718  },
00:11:30.718  "claimed": true,
00:11:30.718  "claim_type": "exclusive_write",
00:11:30.718  "zoned": false,
00:11:30.718  "supported_io_types": {
00:11:30.718  "read": true,
00:11:30.718  "write": true,
00:11:30.718  "unmap": true,
00:11:30.718  "flush": true,
00:11:30.718  "reset": true,
00:11:30.718  "nvme_admin": false,
00:11:30.718  "nvme_io": false,
00:11:30.718  "nvme_io_md": false,
00:11:30.718  "write_zeroes": true,
00:11:30.718  "zcopy": true,
00:11:30.718  "get_zone_info": false,
00:11:30.719  "zone_management": false,
00:11:30.719  "zone_append": false,
00:11:30.719  "compare": false,
00:11:30.719  "compare_and_write": false,
00:11:30.719  "abort": true,
00:11:30.719  "seek_hole": false,
00:11:30.719  "seek_data": false,
00:11:30.719  "copy": true,
00:11:30.719  "nvme_iov_md": false
00:11:30.719  },
00:11:30.719  "memory_domains": [
00:11:30.719  {
00:11:30.719  "dma_device_id": "system",
00:11:30.719  "dma_device_type": 1
00:11:30.719  },
00:11:30.719  {
00:11:30.719  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:30.719  "dma_device_type": 2
00:11:30.719  }
00:11:30.719  ],
00:11:30.719  "driver_specific": {}
00:11:30.719  }
00:11:30.719  ]
00:11:30.719   11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:30.719   11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
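The 32 and 512 passed to bdev_malloc_create above mean a 32 MiB malloc bdev with 512-byte blocks, which is exactly the 65536 num_blocks reported for BaseBdev1 (32 * 1024 * 1024 / 512 = 65536). A sketch of the same create-and-wait sequence issued by hand (rpc.py and the default socket are assumptions):
  ./scripts/rpc.py bdev_malloc_create 32 512 -b BaseBdev1
  ./scripts/rpc.py bdev_wait_for_examine
  ./scripts/rpc.py bdev_get_bdevs -b BaseBdev1 -t 2000 | jq '.[0].num_blocks'   # -> 65536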
00:11:30.719   11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:30.719   11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:30.719   11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:30.719   11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:30.719   11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:30.719   11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:30.719   11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:30.719   11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:30.719   11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:30.719   11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:30.719    11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:30.719    11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:30.719    11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.719    11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:30.719    11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:30.719   11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:30.719    "name": "Existed_Raid",
00:11:30.719    "uuid": "00000000-0000-0000-0000-000000000000",
00:11:30.719    "strip_size_kb": 64,
00:11:30.719    "state": "configuring",
00:11:30.719    "raid_level": "concat",
00:11:30.719    "superblock": false,
00:11:30.719    "num_base_bdevs": 4,
00:11:30.719    "num_base_bdevs_discovered": 1,
00:11:30.719    "num_base_bdevs_operational": 4,
00:11:30.719    "base_bdevs_list": [
00:11:30.719      {
00:11:30.719        "name": "BaseBdev1",
00:11:30.719        "uuid": "739b9322-0cf8-49aa-92a3-485d9c4663fa",
00:11:30.719        "is_configured": true,
00:11:30.719        "data_offset": 0,
00:11:30.719        "data_size": 65536
00:11:30.719      },
00:11:30.719      {
00:11:30.719        "name": "BaseBdev2",
00:11:30.719        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:30.719        "is_configured": false,
00:11:30.719        "data_offset": 0,
00:11:30.719        "data_size": 0
00:11:30.719      },
00:11:30.719      {
00:11:30.719        "name": "BaseBdev3",
00:11:30.719        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:30.719        "is_configured": false,
00:11:30.719        "data_offset": 0,
00:11:30.719        "data_size": 0
00:11:30.719      },
00:11:30.719      {
00:11:30.719        "name": "BaseBdev4",
00:11:30.719        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:30.719        "is_configured": false,
00:11:30.719        "data_offset": 0,
00:11:30.719        "data_size": 0
00:11:30.719      }
00:11:30.719    ]
00:11:30.719  }'
00:11:30.719   11:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:30.719   11:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:31.286   11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:11:31.286   11:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:31.286   11:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:31.286  [2024-12-16 11:32:57.078723] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:31.286  [2024-12-16 11:32:57.078863] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:11:31.286   11:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:31.286   11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:31.286   11:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:31.286   11:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:31.286  [2024-12-16 11:32:57.090740] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:31.286  [2024-12-16 11:32:57.092616] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:31.286  [2024-12-16 11:32:57.092657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:31.286  [2024-12-16 11:32:57.092667] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:31.286  [2024-12-16 11:32:57.092675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:31.286  [2024-12-16 11:32:57.092682] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:11:31.286  [2024-12-16 11:32:57.092690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:11:31.286   11:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:31.286   11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:11:31.286   11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:31.286   11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:31.286   11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:31.286   11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:31.286   11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:31.286   11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:31.286   11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:31.286   11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:31.286   11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:31.286   11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:31.286   11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:31.286    11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:31.286    11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:31.286    11:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:31.286    11:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:31.286    11:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:31.286   11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:31.286    "name": "Existed_Raid",
00:11:31.286    "uuid": "00000000-0000-0000-0000-000000000000",
00:11:31.286    "strip_size_kb": 64,
00:11:31.286    "state": "configuring",
00:11:31.286    "raid_level": "concat",
00:11:31.286    "superblock": false,
00:11:31.286    "num_base_bdevs": 4,
00:11:31.286    "num_base_bdevs_discovered": 1,
00:11:31.286    "num_base_bdevs_operational": 4,
00:11:31.286    "base_bdevs_list": [
00:11:31.286      {
00:11:31.286        "name": "BaseBdev1",
00:11:31.286        "uuid": "739b9322-0cf8-49aa-92a3-485d9c4663fa",
00:11:31.286        "is_configured": true,
00:11:31.286        "data_offset": 0,
00:11:31.286        "data_size": 65536
00:11:31.286      },
00:11:31.286      {
00:11:31.286        "name": "BaseBdev2",
00:11:31.286        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:31.286        "is_configured": false,
00:11:31.286        "data_offset": 0,
00:11:31.286        "data_size": 0
00:11:31.286      },
00:11:31.286      {
00:11:31.286        "name": "BaseBdev3",
00:11:31.286        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:31.286        "is_configured": false,
00:11:31.286        "data_offset": 0,
00:11:31.286        "data_size": 0
00:11:31.286      },
00:11:31.286      {
00:11:31.286        "name": "BaseBdev4",
00:11:31.286        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:31.286        "is_configured": false,
00:11:31.286        "data_offset": 0,
00:11:31.286        "data_size": 0
00:11:31.286      }
00:11:31.286    ]
00:11:31.286  }'
00:11:31.286   11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:31.286   11:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:31.545   11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:11:31.545   11:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:31.545   11:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:31.545  [2024-12-16 11:32:57.536967] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:31.545  BaseBdev2
00:11:31.545   11:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:31.545   11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:11:31.545   11:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:11:31.545   11:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:31.545   11:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:11:31.545   11:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:31.545   11:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:31.545   11:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:31.545   11:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:31.545   11:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:31.545   11:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:31.545   11:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:11:31.545   11:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:31.545   11:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:31.545  [
00:11:31.545  {
00:11:31.545  "name": "BaseBdev2",
00:11:31.545  "aliases": [
00:11:31.545  "ac4b0e80-6869-4d76-b376-94a66d9c5042"
00:11:31.545  ],
00:11:31.545  "product_name": "Malloc disk",
00:11:31.545  "block_size": 512,
00:11:31.545  "num_blocks": 65536,
00:11:31.545  "uuid": "ac4b0e80-6869-4d76-b376-94a66d9c5042",
00:11:31.545  "assigned_rate_limits": {
00:11:31.545  "rw_ios_per_sec": 0,
00:11:31.545  "rw_mbytes_per_sec": 0,
00:11:31.545  "r_mbytes_per_sec": 0,
00:11:31.545  "w_mbytes_per_sec": 0
00:11:31.545  },
00:11:31.545  "claimed": true,
00:11:31.545  "claim_type": "exclusive_write",
00:11:31.545  "zoned": false,
00:11:31.545  "supported_io_types": {
00:11:31.545  "read": true,
00:11:31.545  "write": true,
00:11:31.545  "unmap": true,
00:11:31.545  "flush": true,
00:11:31.545  "reset": true,
00:11:31.545  "nvme_admin": false,
00:11:31.545  "nvme_io": false,
00:11:31.545  "nvme_io_md": false,
00:11:31.545  "write_zeroes": true,
00:11:31.545  "zcopy": true,
00:11:31.545  "get_zone_info": false,
00:11:31.545  "zone_management": false,
00:11:31.545  "zone_append": false,
00:11:31.545  "compare": false,
00:11:31.545  "compare_and_write": false,
00:11:31.545  "abort": true,
00:11:31.545  "seek_hole": false,
00:11:31.545  "seek_data": false,
00:11:31.545  "copy": true,
00:11:31.545  "nvme_iov_md": false
00:11:31.545  },
00:11:31.545  "memory_domains": [
00:11:31.545  {
00:11:31.545  "dma_device_id": "system",
00:11:31.545  "dma_device_type": 1
00:11:31.545  },
00:11:31.545  {
00:11:31.545  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:31.545  "dma_device_type": 2
00:11:31.545  }
00:11:31.545  ],
00:11:31.545  "driver_specific": {}
00:11:31.545  }
00:11:31.545  ]
00:11:31.545   11:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:31.545   11:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:11:31.545   11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:31.545   11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:31.545   11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:31.545   11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:31.545   11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:31.545   11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:31.545   11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:31.545   11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:31.545   11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:31.545   11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:31.545   11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:31.545   11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:31.545    11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:31.545    11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:31.545    11:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:31.545    11:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:31.545    11:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:31.805   11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:31.805    "name": "Existed_Raid",
00:11:31.805    "uuid": "00000000-0000-0000-0000-000000000000",
00:11:31.805    "strip_size_kb": 64,
00:11:31.805    "state": "configuring",
00:11:31.805    "raid_level": "concat",
00:11:31.805    "superblock": false,
00:11:31.805    "num_base_bdevs": 4,
00:11:31.805    "num_base_bdevs_discovered": 2,
00:11:31.805    "num_base_bdevs_operational": 4,
00:11:31.805    "base_bdevs_list": [
00:11:31.805      {
00:11:31.805        "name": "BaseBdev1",
00:11:31.805        "uuid": "739b9322-0cf8-49aa-92a3-485d9c4663fa",
00:11:31.805        "is_configured": true,
00:11:31.805        "data_offset": 0,
00:11:31.805        "data_size": 65536
00:11:31.805      },
00:11:31.805      {
00:11:31.805        "name": "BaseBdev2",
00:11:31.805        "uuid": "ac4b0e80-6869-4d76-b376-94a66d9c5042",
00:11:31.805        "is_configured": true,
00:11:31.805        "data_offset": 0,
00:11:31.805        "data_size": 65536
00:11:31.805      },
00:11:31.805      {
00:11:31.805        "name": "BaseBdev3",
00:11:31.805        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:31.805        "is_configured": false,
00:11:31.805        "data_offset": 0,
00:11:31.805        "data_size": 0
00:11:31.805      },
00:11:31.805      {
00:11:31.805        "name": "BaseBdev4",
00:11:31.805        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:31.805        "is_configured": false,
00:11:31.805        "data_offset": 0,
00:11:31.805        "data_size": 0
00:11:31.805      }
00:11:31.805    ]
00:11:31.805  }'
00:11:31.805   11:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:31.805   11:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:32.064   11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:11:32.064   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:32.064   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:32.064  [2024-12-16 11:32:58.031273] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:32.064  BaseBdev3
00:11:32.064   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:32.064   11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:11:32.064   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:11:32.064   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:32.064   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:11:32.064   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:32.064   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:32.064   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:32.064   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:32.064   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:32.064   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:32.064   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:11:32.064   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:32.064   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:32.064  [
00:11:32.064  {
00:11:32.064  "name": "BaseBdev3",
00:11:32.064  "aliases": [
00:11:32.064  "eefb4952-b920-4829-8964-b2f6e9f01439"
00:11:32.064  ],
00:11:32.064  "product_name": "Malloc disk",
00:11:32.064  "block_size": 512,
00:11:32.064  "num_blocks": 65536,
00:11:32.064  "uuid": "eefb4952-b920-4829-8964-b2f6e9f01439",
00:11:32.064  "assigned_rate_limits": {
00:11:32.064  "rw_ios_per_sec": 0,
00:11:32.064  "rw_mbytes_per_sec": 0,
00:11:32.064  "r_mbytes_per_sec": 0,
00:11:32.064  "w_mbytes_per_sec": 0
00:11:32.064  },
00:11:32.064  "claimed": true,
00:11:32.064  "claim_type": "exclusive_write",
00:11:32.064  "zoned": false,
00:11:32.064  "supported_io_types": {
00:11:32.064  "read": true,
00:11:32.064  "write": true,
00:11:32.064  "unmap": true,
00:11:32.064  "flush": true,
00:11:32.064  "reset": true,
00:11:32.064  "nvme_admin": false,
00:11:32.064  "nvme_io": false,
00:11:32.064  "nvme_io_md": false,
00:11:32.064  "write_zeroes": true,
00:11:32.064  "zcopy": true,
00:11:32.064  "get_zone_info": false,
00:11:32.064  "zone_management": false,
00:11:32.064  "zone_append": false,
00:11:32.064  "compare": false,
00:11:32.064  "compare_and_write": false,
00:11:32.064  "abort": true,
00:11:32.064  "seek_hole": false,
00:11:32.064  "seek_data": false,
00:11:32.064  "copy": true,
00:11:32.064  "nvme_iov_md": false
00:11:32.064  },
00:11:32.064  "memory_domains": [
00:11:32.064  {
00:11:32.064  "dma_device_id": "system",
00:11:32.064  "dma_device_type": 1
00:11:32.064  },
00:11:32.064  {
00:11:32.064  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:32.064  "dma_device_type": 2
00:11:32.064  }
00:11:32.064  ],
00:11:32.064  "driver_specific": {}
00:11:32.064  }
00:11:32.064  ]
00:11:32.064   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:32.064   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:11:32.064   11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:32.064   11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:32.064   11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:32.064   11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:32.064   11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:32.064   11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:32.064   11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:32.064   11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:32.065   11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:32.065   11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:32.065   11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:32.065   11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:32.065    11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:32.065    11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:32.065    11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:32.065    11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:32.065    11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:32.065   11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:32.065    "name": "Existed_Raid",
00:11:32.065    "uuid": "00000000-0000-0000-0000-000000000000",
00:11:32.065    "strip_size_kb": 64,
00:11:32.065    "state": "configuring",
00:11:32.065    "raid_level": "concat",
00:11:32.065    "superblock": false,
00:11:32.065    "num_base_bdevs": 4,
00:11:32.065    "num_base_bdevs_discovered": 3,
00:11:32.065    "num_base_bdevs_operational": 4,
00:11:32.065    "base_bdevs_list": [
00:11:32.065      {
00:11:32.065        "name": "BaseBdev1",
00:11:32.065        "uuid": "739b9322-0cf8-49aa-92a3-485d9c4663fa",
00:11:32.065        "is_configured": true,
00:11:32.065        "data_offset": 0,
00:11:32.065        "data_size": 65536
00:11:32.065      },
00:11:32.065      {
00:11:32.065        "name": "BaseBdev2",
00:11:32.065        "uuid": "ac4b0e80-6869-4d76-b376-94a66d9c5042",
00:11:32.065        "is_configured": true,
00:11:32.065        "data_offset": 0,
00:11:32.065        "data_size": 65536
00:11:32.065      },
00:11:32.065      {
00:11:32.065        "name": "BaseBdev3",
00:11:32.065        "uuid": "eefb4952-b920-4829-8964-b2f6e9f01439",
00:11:32.065        "is_configured": true,
00:11:32.065        "data_offset": 0,
00:11:32.065        "data_size": 65536
00:11:32.065      },
00:11:32.065      {
00:11:32.065        "name": "BaseBdev4",
00:11:32.065        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:32.065        "is_configured": false,
00:11:32.065        "data_offset": 0,
00:11:32.065        "data_size": 0
00:11:32.065      }
00:11:32.065    ]
00:11:32.065  }'
00:11:32.065   11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:32.065   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:32.728   11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:11:32.728   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:32.728   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:32.728  [2024-12-16 11:32:58.549795] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:11:32.728  [2024-12-16 11:32:58.549850] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:11:32.728  [2024-12-16 11:32:58.549861] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512
00:11:32.728  [2024-12-16 11:32:58.550204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:11:32.728  [2024-12-16 11:32:58.550393] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:11:32.728  [2024-12-16 11:32:58.550423] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:11:32.728  [2024-12-16 11:32:58.550671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:32.728  BaseBdev4
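With the fourth base bdev claimed, the raid moves online and the configure callback reports blockcnt 262144: a concat volume simply strings the four 65536-block members together, so no capacity is lost to redundancy. The arithmetic (illustrative only):
  echo $((4 * 65536))   # -> 262144, matching the "blockcnt 262144" debug line above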
00:11:32.728   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:32.728   11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:11:32.728   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:11:32.728   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:32.728   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:11:32.728   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:32.728   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:32.728   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:32.728   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:32.728   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:32.728   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:32.728   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:11:32.728   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:32.728   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:32.728  [
00:11:32.728  {
00:11:32.728  "name": "BaseBdev4",
00:11:32.728  "aliases": [
00:11:32.728  "4a931ebc-fae9-4c8e-aff9-b93fd158f126"
00:11:32.728  ],
00:11:32.728  "product_name": "Malloc disk",
00:11:32.728  "block_size": 512,
00:11:32.728  "num_blocks": 65536,
00:11:32.728  "uuid": "4a931ebc-fae9-4c8e-aff9-b93fd158f126",
00:11:32.728  "assigned_rate_limits": {
00:11:32.728  "rw_ios_per_sec": 0,
00:11:32.728  "rw_mbytes_per_sec": 0,
00:11:32.728  "r_mbytes_per_sec": 0,
00:11:32.728  "w_mbytes_per_sec": 0
00:11:32.728  },
00:11:32.728  "claimed": true,
00:11:32.729  "claim_type": "exclusive_write",
00:11:32.729  "zoned": false,
00:11:32.729  "supported_io_types": {
00:11:32.729  "read": true,
00:11:32.729  "write": true,
00:11:32.729  "unmap": true,
00:11:32.729  "flush": true,
00:11:32.729  "reset": true,
00:11:32.729  "nvme_admin": false,
00:11:32.729  "nvme_io": false,
00:11:32.729  "nvme_io_md": false,
00:11:32.729  "write_zeroes": true,
00:11:32.729  "zcopy": true,
00:11:32.729  "get_zone_info": false,
00:11:32.729  "zone_management": false,
00:11:32.729  "zone_append": false,
00:11:32.729  "compare": false,
00:11:32.729  "compare_and_write": false,
00:11:32.729  "abort": true,
00:11:32.729  "seek_hole": false,
00:11:32.729  "seek_data": false,
00:11:32.729  "copy": true,
00:11:32.729  "nvme_iov_md": false
00:11:32.729  },
00:11:32.729  "memory_domains": [
00:11:32.729  {
00:11:32.729  "dma_device_id": "system",
00:11:32.729  "dma_device_type": 1
00:11:32.729  },
00:11:32.729  {
00:11:32.729  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:32.729  "dma_device_type": 2
00:11:32.729  }
00:11:32.729  ],
00:11:32.729  "driver_specific": {}
00:11:32.729  }
00:11:32.729  ]
00:11:32.729   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:32.729   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:11:32.729   11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:32.729   11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:32.729   11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4
00:11:32.729   11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:32.729   11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:32.729   11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:32.729   11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:32.729   11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:32.729   11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:32.729   11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:32.729   11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:32.729   11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:32.729    11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:32.729    11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:32.729    11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:32.729    11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:32.729    11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:32.729   11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:32.729    "name": "Existed_Raid",
00:11:32.729    "uuid": "cee833cd-3959-4165-a2be-f21be04f95af",
00:11:32.729    "strip_size_kb": 64,
00:11:32.729    "state": "online",
00:11:32.729    "raid_level": "concat",
00:11:32.729    "superblock": false,
00:11:32.729    "num_base_bdevs": 4,
00:11:32.729    "num_base_bdevs_discovered": 4,
00:11:32.729    "num_base_bdevs_operational": 4,
00:11:32.729    "base_bdevs_list": [
00:11:32.729      {
00:11:32.729        "name": "BaseBdev1",
00:11:32.729        "uuid": "739b9322-0cf8-49aa-92a3-485d9c4663fa",
00:11:32.729        "is_configured": true,
00:11:32.729        "data_offset": 0,
00:11:32.729        "data_size": 65536
00:11:32.729      },
00:11:32.729      {
00:11:32.729        "name": "BaseBdev2",
00:11:32.729        "uuid": "ac4b0e80-6869-4d76-b376-94a66d9c5042",
00:11:32.729        "is_configured": true,
00:11:32.729        "data_offset": 0,
00:11:32.729        "data_size": 65536
00:11:32.729      },
00:11:32.729      {
00:11:32.729        "name": "BaseBdev3",
00:11:32.729        "uuid": "eefb4952-b920-4829-8964-b2f6e9f01439",
00:11:32.729        "is_configured": true,
00:11:32.729        "data_offset": 0,
00:11:32.729        "data_size": 65536
00:11:32.729      },
00:11:32.729      {
00:11:32.729        "name": "BaseBdev4",
00:11:32.729        "uuid": "4a931ebc-fae9-4c8e-aff9-b93fd158f126",
00:11:32.729        "is_configured": true,
00:11:32.729        "data_offset": 0,
00:11:32.729        "data_size": 65536
00:11:32.729      }
00:11:32.729    ]
00:11:32.729  }'
00:11:32.729   11:32:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:32.729   11:32:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:33.297   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:11:33.297   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:11:33.297   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:33.297   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:33.297   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:11:33.298   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:33.298    11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:11:33.298    11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:33.298    11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:33.298    11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:33.298  [2024-12-16 11:32:59.077327] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:33.298    11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:33.298   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:33.298    "name": "Existed_Raid",
00:11:33.298    "aliases": [
00:11:33.298      "cee833cd-3959-4165-a2be-f21be04f95af"
00:11:33.298    ],
00:11:33.298    "product_name": "Raid Volume",
00:11:33.298    "block_size": 512,
00:11:33.298    "num_blocks": 262144,
00:11:33.298    "uuid": "cee833cd-3959-4165-a2be-f21be04f95af",
00:11:33.298    "assigned_rate_limits": {
00:11:33.298      "rw_ios_per_sec": 0,
00:11:33.298      "rw_mbytes_per_sec": 0,
00:11:33.298      "r_mbytes_per_sec": 0,
00:11:33.298      "w_mbytes_per_sec": 0
00:11:33.298    },
00:11:33.298    "claimed": false,
00:11:33.298    "zoned": false,
00:11:33.298    "supported_io_types": {
00:11:33.298      "read": true,
00:11:33.298      "write": true,
00:11:33.298      "unmap": true,
00:11:33.298      "flush": true,
00:11:33.298      "reset": true,
00:11:33.298      "nvme_admin": false,
00:11:33.298      "nvme_io": false,
00:11:33.298      "nvme_io_md": false,
00:11:33.298      "write_zeroes": true,
00:11:33.298      "zcopy": false,
00:11:33.298      "get_zone_info": false,
00:11:33.298      "zone_management": false,
00:11:33.298      "zone_append": false,
00:11:33.298      "compare": false,
00:11:33.298      "compare_and_write": false,
00:11:33.298      "abort": false,
00:11:33.298      "seek_hole": false,
00:11:33.298      "seek_data": false,
00:11:33.298      "copy": false,
00:11:33.298      "nvme_iov_md": false
00:11:33.298    },
00:11:33.298    "memory_domains": [
00:11:33.298      {
00:11:33.298        "dma_device_id": "system",
00:11:33.298        "dma_device_type": 1
00:11:33.298      },
00:11:33.298      {
00:11:33.298        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:33.298        "dma_device_type": 2
00:11:33.298      },
00:11:33.298      {
00:11:33.298        "dma_device_id": "system",
00:11:33.298        "dma_device_type": 1
00:11:33.298      },
00:11:33.298      {
00:11:33.298        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:33.298        "dma_device_type": 2
00:11:33.298      },
00:11:33.298      {
00:11:33.298        "dma_device_id": "system",
00:11:33.298        "dma_device_type": 1
00:11:33.298      },
00:11:33.298      {
00:11:33.298        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:33.298        "dma_device_type": 2
00:11:33.298      },
00:11:33.298      {
00:11:33.298        "dma_device_id": "system",
00:11:33.298        "dma_device_type": 1
00:11:33.298      },
00:11:33.298      {
00:11:33.298        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:33.298        "dma_device_type": 2
00:11:33.298      }
00:11:33.298    ],
00:11:33.298    "driver_specific": {
00:11:33.298      "raid": {
00:11:33.298        "uuid": "cee833cd-3959-4165-a2be-f21be04f95af",
00:11:33.298        "strip_size_kb": 64,
00:11:33.298        "state": "online",
00:11:33.298        "raid_level": "concat",
00:11:33.298        "superblock": false,
00:11:33.298        "num_base_bdevs": 4,
00:11:33.298        "num_base_bdevs_discovered": 4,
00:11:33.298        "num_base_bdevs_operational": 4,
00:11:33.298        "base_bdevs_list": [
00:11:33.298          {
00:11:33.298            "name": "BaseBdev1",
00:11:33.298            "uuid": "739b9322-0cf8-49aa-92a3-485d9c4663fa",
00:11:33.298            "is_configured": true,
00:11:33.298            "data_offset": 0,
00:11:33.298            "data_size": 65536
00:11:33.298          },
00:11:33.298          {
00:11:33.298            "name": "BaseBdev2",
00:11:33.298            "uuid": "ac4b0e80-6869-4d76-b376-94a66d9c5042",
00:11:33.298            "is_configured": true,
00:11:33.298            "data_offset": 0,
00:11:33.298            "data_size": 65536
00:11:33.298          },
00:11:33.298          {
00:11:33.298            "name": "BaseBdev3",
00:11:33.298            "uuid": "eefb4952-b920-4829-8964-b2f6e9f01439",
00:11:33.298            "is_configured": true,
00:11:33.298            "data_offset": 0,
00:11:33.298            "data_size": 65536
00:11:33.298          },
00:11:33.298          {
00:11:33.298            "name": "BaseBdev4",
00:11:33.298            "uuid": "4a931ebc-fae9-4c8e-aff9-b93fd158f126",
00:11:33.298            "is_configured": true,
00:11:33.298            "data_offset": 0,
00:11:33.298            "data_size": 65536
00:11:33.298          }
00:11:33.298        ]
00:11:33.298      }
00:11:33.298    }
00:11:33.298  }'
00:11:33.298    11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:33.298   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:11:33.298  BaseBdev2
00:11:33.298  BaseBdev3
00:11:33.298  BaseBdev4'
00:11:33.298    11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:33.298   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:11:33.298   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:33.298    11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:33.298    11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:11:33.298    11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:33.298    11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:33.298    11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:33.298   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:33.298   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:33.298   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:33.298    11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:11:33.298    11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:33.298    11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:33.298    11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:33.298    11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:33.298   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:33.298   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:33.298   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:33.298    11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:33.298    11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:11:33.298    11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:33.298    11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:33.298    11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:33.298   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:33.298   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:33.298   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:33.298    11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:33.298    11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:11:33.298    11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:33.298    11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:33.558    11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:33.558   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:33.558   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:33.558   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:11:33.558   11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:33.558   11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:33.558  [2024-12-16 11:32:59.400521] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:11:33.558  [2024-12-16 11:32:59.400572] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:33.558  [2024-12-16 11:32:59.400629] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:33.558   11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:33.558   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:11:33.558   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:11:33.558   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:11:33.558   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:11:33.558   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:11:33.558   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3
00:11:33.558   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:33.558   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:11:33.558   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:33.558   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:33.558   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:33.558   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:33.558   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:33.558   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:33.558   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:33.558    11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:33.558    11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:33.558    11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:33.558    11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:33.558    11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:33.558   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:33.558    "name": "Existed_Raid",
00:11:33.558    "uuid": "cee833cd-3959-4165-a2be-f21be04f95af",
00:11:33.558    "strip_size_kb": 64,
00:11:33.558    "state": "offline",
00:11:33.558    "raid_level": "concat",
00:11:33.558    "superblock": false,
00:11:33.558    "num_base_bdevs": 4,
00:11:33.558    "num_base_bdevs_discovered": 3,
00:11:33.558    "num_base_bdevs_operational": 3,
00:11:33.559    "base_bdevs_list": [
00:11:33.559      {
00:11:33.559        "name": null,
00:11:33.559        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:33.559        "is_configured": false,
00:11:33.559        "data_offset": 0,
00:11:33.559        "data_size": 65536
00:11:33.559      },
00:11:33.559      {
00:11:33.559        "name": "BaseBdev2",
00:11:33.559        "uuid": "ac4b0e80-6869-4d76-b376-94a66d9c5042",
00:11:33.559        "is_configured": true,
00:11:33.559        "data_offset": 0,
00:11:33.559        "data_size": 65536
00:11:33.559      },
00:11:33.559      {
00:11:33.559        "name": "BaseBdev3",
00:11:33.559        "uuid": "eefb4952-b920-4829-8964-b2f6e9f01439",
00:11:33.559        "is_configured": true,
00:11:33.559        "data_offset": 0,
00:11:33.559        "data_size": 65536
00:11:33.559      },
00:11:33.559      {
00:11:33.559        "name": "BaseBdev4",
00:11:33.559        "uuid": "4a931ebc-fae9-4c8e-aff9-b93fd158f126",
00:11:33.559        "is_configured": true,
00:11:33.559        "data_offset": 0,
00:11:33.559        "data_size": 65536
00:11:33.559      }
00:11:33.559    ]
00:11:33.559  }'
00:11:33.559   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:33.559   11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:33.820   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:11:33.820   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:33.820    11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:33.820    11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:33.820    11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:33.820    11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:11:33.820    11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:34.085   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:11:34.085   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:11:34.085   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:11:34.085   11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:34.085   11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:34.085  [2024-12-16 11:32:59.895483] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:11:34.085   11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:34.085   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:11:34.085   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:34.085    11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:34.085    11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:11:34.085    11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:34.085    11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:34.085    11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:34.085   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:11:34.085   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:11:34.085   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:11:34.085   11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:34.085   11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:34.085  [2024-12-16 11:32:59.967043] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:11:34.085   11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:34.085   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:11:34.085   11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:34.085    11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:11:34.085    11:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:34.085    11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:34.085    11:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:34.085    11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:34.085   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:11:34.085   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:11:34.085   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:11:34.085   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:34.085   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:34.085  [2024-12-16 11:33:00.022822] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:11:34.085  [2024-12-16 11:33:00.022880] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline
00:11:34.085   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:34.085   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:11:34.085   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:34.085    11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:34.085    11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:34.085    11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:34.085    11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:11:34.085    11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:34.085   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:11:34.085   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:11:34.085   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:11:34.085   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:11:34.085   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:34.085   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:11:34.085   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:34.085   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:34.085  BaseBdev2
00:11:34.085   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:34.085   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:11:34.085   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:11:34.085   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:34.085   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:11:34.086   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:34.086   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:34.086   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:34.086   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:34.086   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:34.086   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:34.086   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:11:34.086   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:34.086   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:34.086  [
00:11:34.086  {
00:11:34.086  "name": "BaseBdev2",
00:11:34.086  "aliases": [
00:11:34.086  "e97e13a8-cd16-4be3-b406-f214b4e2b0ce"
00:11:34.086  ],
00:11:34.086  "product_name": "Malloc disk",
00:11:34.086  "block_size": 512,
00:11:34.086  "num_blocks": 65536,
00:11:34.086  "uuid": "e97e13a8-cd16-4be3-b406-f214b4e2b0ce",
00:11:34.086  "assigned_rate_limits": {
00:11:34.086  "rw_ios_per_sec": 0,
00:11:34.086  "rw_mbytes_per_sec": 0,
00:11:34.086  "r_mbytes_per_sec": 0,
00:11:34.086  "w_mbytes_per_sec": 0
00:11:34.086  },
00:11:34.086  "claimed": false,
00:11:34.086  "zoned": false,
00:11:34.086  "supported_io_types": {
00:11:34.086  "read": true,
00:11:34.086  "write": true,
00:11:34.086  "unmap": true,
00:11:34.086  "flush": true,
00:11:34.086  "reset": true,
00:11:34.086  "nvme_admin": false,
00:11:34.086  "nvme_io": false,
00:11:34.086  "nvme_io_md": false,
00:11:34.086  "write_zeroes": true,
00:11:34.086  "zcopy": true,
00:11:34.086  "get_zone_info": false,
00:11:34.086  "zone_management": false,
00:11:34.086  "zone_append": false,
00:11:34.086  "compare": false,
00:11:34.086  "compare_and_write": false,
00:11:34.086  "abort": true,
00:11:34.086  "seek_hole": false,
00:11:34.086  "seek_data": false,
00:11:34.086  "copy": true,
00:11:34.086  "nvme_iov_md": false
00:11:34.086  },
00:11:34.086  "memory_domains": [
00:11:34.086  {
00:11:34.086  "dma_device_id": "system",
00:11:34.086  "dma_device_type": 1
00:11:34.086  },
00:11:34.086  {
00:11:34.086  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:34.086  "dma_device_type": 2
00:11:34.086  }
00:11:34.086  ],
00:11:34.086  "driver_specific": {}
00:11:34.086  }
00:11:34.086  ]
00:11:34.086   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:34.086   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:11:34.086   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:11:34.086   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:34.086   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:11:34.086   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:34.086   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:34.345  BaseBdev3
00:11:34.345   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:34.345   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:11:34.345   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:11:34.345   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:34.345   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:11:34.345   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:34.345   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:34.345   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:34.345   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:34.345   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:34.345   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:34.345   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:11:34.345   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:34.346  [
00:11:34.346  {
00:11:34.346  "name": "BaseBdev3",
00:11:34.346  "aliases": [
00:11:34.346  "c423997d-7ca6-4af7-b3dd-aa4aab291992"
00:11:34.346  ],
00:11:34.346  "product_name": "Malloc disk",
00:11:34.346  "block_size": 512,
00:11:34.346  "num_blocks": 65536,
00:11:34.346  "uuid": "c423997d-7ca6-4af7-b3dd-aa4aab291992",
00:11:34.346  "assigned_rate_limits": {
00:11:34.346  "rw_ios_per_sec": 0,
00:11:34.346  "rw_mbytes_per_sec": 0,
00:11:34.346  "r_mbytes_per_sec": 0,
00:11:34.346  "w_mbytes_per_sec": 0
00:11:34.346  },
00:11:34.346  "claimed": false,
00:11:34.346  "zoned": false,
00:11:34.346  "supported_io_types": {
00:11:34.346  "read": true,
00:11:34.346  "write": true,
00:11:34.346  "unmap": true,
00:11:34.346  "flush": true,
00:11:34.346  "reset": true,
00:11:34.346  "nvme_admin": false,
00:11:34.346  "nvme_io": false,
00:11:34.346  "nvme_io_md": false,
00:11:34.346  "write_zeroes": true,
00:11:34.346  "zcopy": true,
00:11:34.346  "get_zone_info": false,
00:11:34.346  "zone_management": false,
00:11:34.346  "zone_append": false,
00:11:34.346  "compare": false,
00:11:34.346  "compare_and_write": false,
00:11:34.346  "abort": true,
00:11:34.346  "seek_hole": false,
00:11:34.346  "seek_data": false,
00:11:34.346  "copy": true,
00:11:34.346  "nvme_iov_md": false
00:11:34.346  },
00:11:34.346  "memory_domains": [
00:11:34.346  {
00:11:34.346  "dma_device_id": "system",
00:11:34.346  "dma_device_type": 1
00:11:34.346  },
00:11:34.346  {
00:11:34.346  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:34.346  "dma_device_type": 2
00:11:34.346  }
00:11:34.346  ],
00:11:34.346  "driver_specific": {}
00:11:34.346  }
00:11:34.346  ]
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:34.346  BaseBdev4
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:34.346  [
00:11:34.346  {
00:11:34.346  "name": "BaseBdev4",
00:11:34.346  "aliases": [
00:11:34.346  "857cc2d3-055a-4943-90e6-c0137184621a"
00:11:34.346  ],
00:11:34.346  "product_name": "Malloc disk",
00:11:34.346  "block_size": 512,
00:11:34.346  "num_blocks": 65536,
00:11:34.346  "uuid": "857cc2d3-055a-4943-90e6-c0137184621a",
00:11:34.346  "assigned_rate_limits": {
00:11:34.346  "rw_ios_per_sec": 0,
00:11:34.346  "rw_mbytes_per_sec": 0,
00:11:34.346  "r_mbytes_per_sec": 0,
00:11:34.346  "w_mbytes_per_sec": 0
00:11:34.346  },
00:11:34.346  "claimed": false,
00:11:34.346  "zoned": false,
00:11:34.346  "supported_io_types": {
00:11:34.346  "read": true,
00:11:34.346  "write": true,
00:11:34.346  "unmap": true,
00:11:34.346  "flush": true,
00:11:34.346  "reset": true,
00:11:34.346  "nvme_admin": false,
00:11:34.346  "nvme_io": false,
00:11:34.346  "nvme_io_md": false,
00:11:34.346  "write_zeroes": true,
00:11:34.346  "zcopy": true,
00:11:34.346  "get_zone_info": false,
00:11:34.346  "zone_management": false,
00:11:34.346  "zone_append": false,
00:11:34.346  "compare": false,
00:11:34.346  "compare_and_write": false,
00:11:34.346  "abort": true,
00:11:34.346  "seek_hole": false,
00:11:34.346  "seek_data": false,
00:11:34.346  "copy": true,
00:11:34.346  "nvme_iov_md": false
00:11:34.346  },
00:11:34.346  "memory_domains": [
00:11:34.346  {
00:11:34.346  "dma_device_id": "system",
00:11:34.346  "dma_device_type": 1
00:11:34.346  },
00:11:34.346  {
00:11:34.346  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:34.346  "dma_device_type": 2
00:11:34.346  }
00:11:34.346  ],
00:11:34.346  "driver_specific": {}
00:11:34.346  }
00:11:34.346  ]
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:34.346  [2024-12-16 11:33:00.253195] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:34.346  [2024-12-16 11:33:00.253245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:34.346  [2024-12-16 11:33:00.253265] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:34.346  [2024-12-16 11:33:00.255103] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:34.346  [2024-12-16 11:33:00.255157] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:34.346    11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:34.346    11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:34.346    11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:34.346    11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:34.346    11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:34.346    "name": "Existed_Raid",
00:11:34.346    "uuid": "00000000-0000-0000-0000-000000000000",
00:11:34.346    "strip_size_kb": 64,
00:11:34.346    "state": "configuring",
00:11:34.346    "raid_level": "concat",
00:11:34.346    "superblock": false,
00:11:34.346    "num_base_bdevs": 4,
00:11:34.346    "num_base_bdevs_discovered": 3,
00:11:34.346    "num_base_bdevs_operational": 4,
00:11:34.346    "base_bdevs_list": [
00:11:34.346      {
00:11:34.346        "name": "BaseBdev1",
00:11:34.346        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:34.346        "is_configured": false,
00:11:34.346        "data_offset": 0,
00:11:34.346        "data_size": 0
00:11:34.346      },
00:11:34.346      {
00:11:34.346        "name": "BaseBdev2",
00:11:34.346        "uuid": "e97e13a8-cd16-4be3-b406-f214b4e2b0ce",
00:11:34.346        "is_configured": true,
00:11:34.346        "data_offset": 0,
00:11:34.346        "data_size": 65536
00:11:34.346      },
00:11:34.346      {
00:11:34.346        "name": "BaseBdev3",
00:11:34.346        "uuid": "c423997d-7ca6-4af7-b3dd-aa4aab291992",
00:11:34.346        "is_configured": true,
00:11:34.346        "data_offset": 0,
00:11:34.346        "data_size": 65536
00:11:34.346      },
00:11:34.346      {
00:11:34.346        "name": "BaseBdev4",
00:11:34.346        "uuid": "857cc2d3-055a-4943-90e6-c0137184621a",
00:11:34.346        "is_configured": true,
00:11:34.346        "data_offset": 0,
00:11:34.346        "data_size": 65536
00:11:34.346      }
00:11:34.346    ]
00:11:34.346  }'
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:34.346   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:34.915   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:11:34.915   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:34.915   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:34.915  [2024-12-16 11:33:00.736411] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:11:34.915   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:34.915   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:34.915   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:34.915   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:34.915   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:34.915   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:34.915   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:34.915   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:34.915   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:34.915   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:34.915   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:34.915    11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:34.915    11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:34.915    11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:34.915    11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:34.915    11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:34.915   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:34.915    "name": "Existed_Raid",
00:11:34.915    "uuid": "00000000-0000-0000-0000-000000000000",
00:11:34.915    "strip_size_kb": 64,
00:11:34.915    "state": "configuring",
00:11:34.915    "raid_level": "concat",
00:11:34.915    "superblock": false,
00:11:34.915    "num_base_bdevs": 4,
00:11:34.915    "num_base_bdevs_discovered": 2,
00:11:34.915    "num_base_bdevs_operational": 4,
00:11:34.915    "base_bdevs_list": [
00:11:34.915      {
00:11:34.915        "name": "BaseBdev1",
00:11:34.915        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:34.915        "is_configured": false,
00:11:34.915        "data_offset": 0,
00:11:34.915        "data_size": 0
00:11:34.915      },
00:11:34.915      {
00:11:34.915        "name": null,
00:11:34.915        "uuid": "e97e13a8-cd16-4be3-b406-f214b4e2b0ce",
00:11:34.915        "is_configured": false,
00:11:34.915        "data_offset": 0,
00:11:34.915        "data_size": 65536
00:11:34.915      },
00:11:34.915      {
00:11:34.915        "name": "BaseBdev3",
00:11:34.915        "uuid": "c423997d-7ca6-4af7-b3dd-aa4aab291992",
00:11:34.915        "is_configured": true,
00:11:34.915        "data_offset": 0,
00:11:34.915        "data_size": 65536
00:11:34.915      },
00:11:34.915      {
00:11:34.915        "name": "BaseBdev4",
00:11:34.916        "uuid": "857cc2d3-055a-4943-90e6-c0137184621a",
00:11:34.916        "is_configured": true,
00:11:34.916        "data_offset": 0,
00:11:34.916        "data_size": 65536
00:11:34.916      }
00:11:34.916    ]
00:11:34.916  }'
00:11:34.916   11:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:34.916   11:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:35.175    11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:11:35.175    11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:35.175    11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:35.175    11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:35.175    11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:35.175   11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:11:35.175   11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:11:35.175   11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:35.175   11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:35.175  [2024-12-16 11:33:01.234749] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:35.175  BaseBdev1
00:11:35.175   11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:35.175   11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:11:35.175   11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:11:35.175   11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:35.175   11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:11:35.175   11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:35.175   11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:35.175   11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:35.175   11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:35.175   11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:35.434   11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:35.434   11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:11:35.435   11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:35.435   11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:35.435  [
00:11:35.435  {
00:11:35.435  "name": "BaseBdev1",
00:11:35.435  "aliases": [
00:11:35.435  "af5c05a6-c66a-4a32-a527-58b400c40e3a"
00:11:35.435  ],
00:11:35.435  "product_name": "Malloc disk",
00:11:35.435  "block_size": 512,
00:11:35.435  "num_blocks": 65536,
00:11:35.435  "uuid": "af5c05a6-c66a-4a32-a527-58b400c40e3a",
00:11:35.435  "assigned_rate_limits": {
00:11:35.435  "rw_ios_per_sec": 0,
00:11:35.435  "rw_mbytes_per_sec": 0,
00:11:35.435  "r_mbytes_per_sec": 0,
00:11:35.435  "w_mbytes_per_sec": 0
00:11:35.435  },
00:11:35.435  "claimed": true,
00:11:35.435  "claim_type": "exclusive_write",
00:11:35.435  "zoned": false,
00:11:35.435  "supported_io_types": {
00:11:35.435  "read": true,
00:11:35.435  "write": true,
00:11:35.435  "unmap": true,
00:11:35.435  "flush": true,
00:11:35.435  "reset": true,
00:11:35.435  "nvme_admin": false,
00:11:35.435  "nvme_io": false,
00:11:35.435  "nvme_io_md": false,
00:11:35.435  "write_zeroes": true,
00:11:35.435  "zcopy": true,
00:11:35.435  "get_zone_info": false,
00:11:35.435  "zone_management": false,
00:11:35.435  "zone_append": false,
00:11:35.435  "compare": false,
00:11:35.435  "compare_and_write": false,
00:11:35.435  "abort": true,
00:11:35.435  "seek_hole": false,
00:11:35.435  "seek_data": false,
00:11:35.435  "copy": true,
00:11:35.435  "nvme_iov_md": false
00:11:35.435  },
00:11:35.435  "memory_domains": [
00:11:35.435  {
00:11:35.435  "dma_device_id": "system",
00:11:35.435  "dma_device_type": 1
00:11:35.435  },
00:11:35.435  {
00:11:35.435  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:35.435  "dma_device_type": 2
00:11:35.435  }
00:11:35.435  ],
00:11:35.435  "driver_specific": {}
00:11:35.435  }
00:11:35.435  ]
00:11:35.435   11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:35.435   11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:11:35.435   11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:35.435   11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:35.435   11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:35.435   11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:35.435   11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:35.435   11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:35.435   11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:35.435   11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:35.435   11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:35.435   11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:35.435    11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:35.435    11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:35.435    11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:35.435    11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:35.435    11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:35.435   11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:35.435    "name": "Existed_Raid",
00:11:35.435    "uuid": "00000000-0000-0000-0000-000000000000",
00:11:35.435    "strip_size_kb": 64,
00:11:35.435    "state": "configuring",
00:11:35.435    "raid_level": "concat",
00:11:35.435    "superblock": false,
00:11:35.435    "num_base_bdevs": 4,
00:11:35.435    "num_base_bdevs_discovered": 3,
00:11:35.435    "num_base_bdevs_operational": 4,
00:11:35.435    "base_bdevs_list": [
00:11:35.435      {
00:11:35.435        "name": "BaseBdev1",
00:11:35.435        "uuid": "af5c05a6-c66a-4a32-a527-58b400c40e3a",
00:11:35.435        "is_configured": true,
00:11:35.435        "data_offset": 0,
00:11:35.435        "data_size": 65536
00:11:35.435      },
00:11:35.435      {
00:11:35.435        "name": null,
00:11:35.435        "uuid": "e97e13a8-cd16-4be3-b406-f214b4e2b0ce",
00:11:35.435        "is_configured": false,
00:11:35.435        "data_offset": 0,
00:11:35.435        "data_size": 65536
00:11:35.435      },
00:11:35.435      {
00:11:35.435        "name": "BaseBdev3",
00:11:35.435        "uuid": "c423997d-7ca6-4af7-b3dd-aa4aab291992",
00:11:35.435        "is_configured": true,
00:11:35.435        "data_offset": 0,
00:11:35.435        "data_size": 65536
00:11:35.435      },
00:11:35.435      {
00:11:35.435        "name": "BaseBdev4",
00:11:35.435        "uuid": "857cc2d3-055a-4943-90e6-c0137184621a",
00:11:35.435        "is_configured": true,
00:11:35.435        "data_offset": 0,
00:11:35.435        "data_size": 65536
00:11:35.435      }
00:11:35.435    ]
00:11:35.435  }'
00:11:35.435   11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:35.435   11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:36.003    11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:36.003    11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:11:36.003    11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:36.003    11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:36.003    11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:36.003   11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:11:36.003   11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:11:36.003   11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:36.003   11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:36.003  [2024-12-16 11:33:01.821798] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:11:36.003   11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:36.003   11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:36.003   11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:36.003   11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:36.003   11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:36.003   11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:36.003   11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:36.003   11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:36.003   11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:36.003   11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:36.003   11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:36.003    11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:36.003    11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:36.003    11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:36.003    11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:36.003    11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:36.003   11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:36.003    "name": "Existed_Raid",
00:11:36.004    "uuid": "00000000-0000-0000-0000-000000000000",
00:11:36.004    "strip_size_kb": 64,
00:11:36.004    "state": "configuring",
00:11:36.004    "raid_level": "concat",
00:11:36.004    "superblock": false,
00:11:36.004    "num_base_bdevs": 4,
00:11:36.004    "num_base_bdevs_discovered": 2,
00:11:36.004    "num_base_bdevs_operational": 4,
00:11:36.004    "base_bdevs_list": [
00:11:36.004      {
00:11:36.004        "name": "BaseBdev1",
00:11:36.004        "uuid": "af5c05a6-c66a-4a32-a527-58b400c40e3a",
00:11:36.004        "is_configured": true,
00:11:36.004        "data_offset": 0,
00:11:36.004        "data_size": 65536
00:11:36.004      },
00:11:36.004      {
00:11:36.004        "name": null,
00:11:36.004        "uuid": "e97e13a8-cd16-4be3-b406-f214b4e2b0ce",
00:11:36.004        "is_configured": false,
00:11:36.004        "data_offset": 0,
00:11:36.004        "data_size": 65536
00:11:36.004      },
00:11:36.004      {
00:11:36.004        "name": null,
00:11:36.004        "uuid": "c423997d-7ca6-4af7-b3dd-aa4aab291992",
00:11:36.004        "is_configured": false,
00:11:36.004        "data_offset": 0,
00:11:36.004        "data_size": 65536
00:11:36.004      },
00:11:36.004      {
00:11:36.004        "name": "BaseBdev4",
00:11:36.004        "uuid": "857cc2d3-055a-4943-90e6-c0137184621a",
00:11:36.004        "is_configured": true,
00:11:36.004        "data_offset": 0,
00:11:36.004        "data_size": 65536
00:11:36.004      }
00:11:36.004    ]
00:11:36.004  }'
00:11:36.004   11:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:36.004   11:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:36.263    11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:36.263    11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:11:36.263    11:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:36.263    11:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:36.263    11:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:36.263   11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:11:36.263   11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:11:36.263   11:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:36.263   11:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:36.263  [2024-12-16 11:33:02.305059] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:36.263   11:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:36.263   11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:36.263   11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:36.263   11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:36.263   11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:36.263   11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:36.263   11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:36.263   11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:36.263   11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:36.263   11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:36.263   11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:36.263    11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:36.263    11:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:36.263    11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:36.263    11:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:36.522    11:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:36.522   11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:36.522    "name": "Existed_Raid",
00:11:36.522    "uuid": "00000000-0000-0000-0000-000000000000",
00:11:36.522    "strip_size_kb": 64,
00:11:36.522    "state": "configuring",
00:11:36.522    "raid_level": "concat",
00:11:36.522    "superblock": false,
00:11:36.522    "num_base_bdevs": 4,
00:11:36.522    "num_base_bdevs_discovered": 3,
00:11:36.522    "num_base_bdevs_operational": 4,
00:11:36.522    "base_bdevs_list": [
00:11:36.522      {
00:11:36.522        "name": "BaseBdev1",
00:11:36.522        "uuid": "af5c05a6-c66a-4a32-a527-58b400c40e3a",
00:11:36.522        "is_configured": true,
00:11:36.522        "data_offset": 0,
00:11:36.522        "data_size": 65536
00:11:36.522      },
00:11:36.522      {
00:11:36.522        "name": null,
00:11:36.522        "uuid": "e97e13a8-cd16-4be3-b406-f214b4e2b0ce",
00:11:36.522        "is_configured": false,
00:11:36.522        "data_offset": 0,
00:11:36.522        "data_size": 65536
00:11:36.522      },
00:11:36.522      {
00:11:36.522        "name": "BaseBdev3",
00:11:36.522        "uuid": "c423997d-7ca6-4af7-b3dd-aa4aab291992",
00:11:36.522        "is_configured": true,
00:11:36.522        "data_offset": 0,
00:11:36.522        "data_size": 65536
00:11:36.522      },
00:11:36.522      {
00:11:36.522        "name": "BaseBdev4",
00:11:36.522        "uuid": "857cc2d3-055a-4943-90e6-c0137184621a",
00:11:36.522        "is_configured": true,
00:11:36.522        "data_offset": 0,
00:11:36.522        "data_size": 65536
00:11:36.522      }
00:11:36.522    ]
00:11:36.522  }'
00:11:36.522   11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:36.522   11:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
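Note: the verify_raid_bdev_state helper traced above pulls the raid bdev's JSON with bdev_raid_get_bdevs and narrows it with jq before asserting the expected fields. A minimal stand-alone sketch of the same check, assuming the repo's scripts/rpc.py against the default /var/tmp/spdk.sock socket rather than the harness' rpc_cmd wrapper:

  raid_info=$(./scripts/rpc.py bdev_raid_get_bdevs all \
              | jq -r '.[] | select(.name == "Existed_Raid")')
  # Assert the same fields the helper checks: state, level, strip size,
  # and the operational base bdev count.
  [[ $(jq -r '.state' <<< "$raid_info") == configuring ]]
  [[ $(jq -r '.raid_level' <<< "$raid_info") == concat ]]
  [[ $(jq -r '.strip_size_kb' <<< "$raid_info") == 64 ]]
  [[ $(jq -r '.num_base_bdevs_operational' <<< "$raid_info") == 4 ]]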
00:11:36.781    11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:36.781    11:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:36.781    11:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:36.781    11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:11:36.781    11:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:36.781   11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:11:36.781   11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:11:36.781   11:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:36.781   11:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:36.781  [2024-12-16 11:33:02.800256] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:11:36.781   11:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:36.781   11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:36.781   11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:36.781   11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:36.781   11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:36.781   11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:36.781   11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:36.781   11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:36.781   11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:36.782   11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:36.782   11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:36.782    11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:36.782    11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:36.782    11:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:36.782    11:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:36.782    11:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:37.041   11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:37.041    "name": "Existed_Raid",
00:11:37.041    "uuid": "00000000-0000-0000-0000-000000000000",
00:11:37.041    "strip_size_kb": 64,
00:11:37.041    "state": "configuring",
00:11:37.041    "raid_level": "concat",
00:11:37.041    "superblock": false,
00:11:37.041    "num_base_bdevs": 4,
00:11:37.041    "num_base_bdevs_discovered": 2,
00:11:37.041    "num_base_bdevs_operational": 4,
00:11:37.041    "base_bdevs_list": [
00:11:37.041      {
00:11:37.041        "name": null,
00:11:37.041        "uuid": "af5c05a6-c66a-4a32-a527-58b400c40e3a",
00:11:37.041        "is_configured": false,
00:11:37.041        "data_offset": 0,
00:11:37.041        "data_size": 65536
00:11:37.041      },
00:11:37.041      {
00:11:37.041        "name": null,
00:11:37.041        "uuid": "e97e13a8-cd16-4be3-b406-f214b4e2b0ce",
00:11:37.041        "is_configured": false,
00:11:37.041        "data_offset": 0,
00:11:37.041        "data_size": 65536
00:11:37.041      },
00:11:37.041      {
00:11:37.041        "name": "BaseBdev3",
00:11:37.041        "uuid": "c423997d-7ca6-4af7-b3dd-aa4aab291992",
00:11:37.041        "is_configured": true,
00:11:37.041        "data_offset": 0,
00:11:37.041        "data_size": 65536
00:11:37.041      },
00:11:37.041      {
00:11:37.041        "name": "BaseBdev4",
00:11:37.041        "uuid": "857cc2d3-055a-4943-90e6-c0137184621a",
00:11:37.041        "is_configured": true,
00:11:37.041        "data_offset": 0,
00:11:37.041        "data_size": 65536
00:11:37.041      }
00:11:37.041    ]
00:11:37.041  }'
00:11:37.041   11:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:37.041   11:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:37.300    11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:11:37.300    11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:37.300    11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:37.300    11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:37.300    11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:37.300   11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:11:37.300   11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:11:37.300   11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:37.300   11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:37.300  [2024-12-16 11:33:03.274302] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:37.300   11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:37.300   11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:37.300   11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:37.300   11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:37.300   11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:37.300   11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:37.300   11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:37.300   11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:37.300   11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:37.300   11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:37.300   11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:37.300    11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:37.300    11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:37.300    11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:37.300    11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:37.300    11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:37.300   11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:37.300    "name": "Existed_Raid",
00:11:37.300    "uuid": "00000000-0000-0000-0000-000000000000",
00:11:37.300    "strip_size_kb": 64,
00:11:37.300    "state": "configuring",
00:11:37.300    "raid_level": "concat",
00:11:37.300    "superblock": false,
00:11:37.300    "num_base_bdevs": 4,
00:11:37.300    "num_base_bdevs_discovered": 3,
00:11:37.300    "num_base_bdevs_operational": 4,
00:11:37.300    "base_bdevs_list": [
00:11:37.300      {
00:11:37.300        "name": null,
00:11:37.300        "uuid": "af5c05a6-c66a-4a32-a527-58b400c40e3a",
00:11:37.300        "is_configured": false,
00:11:37.300        "data_offset": 0,
00:11:37.300        "data_size": 65536
00:11:37.300      },
00:11:37.300      {
00:11:37.300        "name": "BaseBdev2",
00:11:37.300        "uuid": "e97e13a8-cd16-4be3-b406-f214b4e2b0ce",
00:11:37.300        "is_configured": true,
00:11:37.300        "data_offset": 0,
00:11:37.300        "data_size": 65536
00:11:37.300      },
00:11:37.300      {
00:11:37.300        "name": "BaseBdev3",
00:11:37.300        "uuid": "c423997d-7ca6-4af7-b3dd-aa4aab291992",
00:11:37.300        "is_configured": true,
00:11:37.300        "data_offset": 0,
00:11:37.300        "data_size": 65536
00:11:37.300      },
00:11:37.300      {
00:11:37.300        "name": "BaseBdev4",
00:11:37.300        "uuid": "857cc2d3-055a-4943-90e6-c0137184621a",
00:11:37.300        "is_configured": true,
00:11:37.300        "data_offset": 0,
00:11:37.300        "data_size": 65536
00:11:37.300      }
00:11:37.300    ]
00:11:37.300  }'
00:11:37.300   11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:37.300   11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:37.869    11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:11:37.869    11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:37.869    11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:37.869    11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:37.869    11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:11:37.869    11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:37.869    11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:37.869    11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:11:37.869    11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:37.869    11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u af5c05a6-c66a-4a32-a527-58b400c40e3a
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:37.869  [2024-12-16 11:33:03.844395] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:11:37.869  [2024-12-16 11:33:03.844448] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:11:37.869  [2024-12-16 11:33:03.844457] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512
00:11:37.869  [2024-12-16 11:33:03.844736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:11:37.869  [2024-12-16 11:33:03.844890] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:11:37.869  [2024-12-16 11:33:03.844908] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00
00:11:37.869  [2024-12-16 11:33:03.845093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:37.869  NewBaseBdev
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:37.869  [
00:11:37.869  {
00:11:37.869  "name": "NewBaseBdev",
00:11:37.869  "aliases": [
00:11:37.869  "af5c05a6-c66a-4a32-a527-58b400c40e3a"
00:11:37.869  ],
00:11:37.869  "product_name": "Malloc disk",
00:11:37.869  "block_size": 512,
00:11:37.869  "num_blocks": 65536,
00:11:37.869  "uuid": "af5c05a6-c66a-4a32-a527-58b400c40e3a",
00:11:37.869  "assigned_rate_limits": {
00:11:37.869  "rw_ios_per_sec": 0,
00:11:37.869  "rw_mbytes_per_sec": 0,
00:11:37.869  "r_mbytes_per_sec": 0,
00:11:37.869  "w_mbytes_per_sec": 0
00:11:37.869  },
00:11:37.869  "claimed": true,
00:11:37.869  "claim_type": "exclusive_write",
00:11:37.869  "zoned": false,
00:11:37.869  "supported_io_types": {
00:11:37.869  "read": true,
00:11:37.869  "write": true,
00:11:37.869  "unmap": true,
00:11:37.869  "flush": true,
00:11:37.869  "reset": true,
00:11:37.869  "nvme_admin": false,
00:11:37.869  "nvme_io": false,
00:11:37.869  "nvme_io_md": false,
00:11:37.869  "write_zeroes": true,
00:11:37.869  "zcopy": true,
00:11:37.869  "get_zone_info": false,
00:11:37.869  "zone_management": false,
00:11:37.869  "zone_append": false,
00:11:37.869  "compare": false,
00:11:37.869  "compare_and_write": false,
00:11:37.869  "abort": true,
00:11:37.869  "seek_hole": false,
00:11:37.869  "seek_data": false,
00:11:37.869  "copy": true,
00:11:37.869  "nvme_iov_md": false
00:11:37.869  },
00:11:37.869  "memory_domains": [
00:11:37.869  {
00:11:37.869  "dma_device_id": "system",
00:11:37.869  "dma_device_type": 1
00:11:37.869  },
00:11:37.869  {
00:11:37.869  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:37.869  "dma_device_type": 2
00:11:37.869  }
00:11:37.869  ],
00:11:37.869  "driver_specific": {}
00:11:37.869  }
00:11:37.869  ]
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
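Note: the sequence above removes BaseBdev1, re-adds BaseBdev2 to the array, and then repairs the empty slot by giving a new malloc bdev the UUID recorded in slot 0. A hedged sketch of that replacement flow (scripts/rpc.py path assumed; sizes match the 32 MiB / 512-byte-block bdevs used by the test):

  uuid=$(./scripts/rpc.py bdev_raid_get_bdevs all \
         | jq -r '.[0].base_bdevs_list[0].uuid')
  ./scripts/rpc.py bdev_malloc_create 32 512 -b NewBaseBdev -u "$uuid"
  # Wait (up to 2000 ms) for the new bdev to register before checking that the
  # raid transitions from configuring to online.
  ./scripts/rpc.py bdev_get_bdevs -b NewBaseBdev -t 2000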
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:37.869    11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:37.869    11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:37.869    11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:37.869    11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:37.869    11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:37.869    "name": "Existed_Raid",
00:11:37.869    "uuid": "45f92f4d-86f0-4f14-b0f8-b339dcc9c01e",
00:11:37.869    "strip_size_kb": 64,
00:11:37.869    "state": "online",
00:11:37.869    "raid_level": "concat",
00:11:37.869    "superblock": false,
00:11:37.869    "num_base_bdevs": 4,
00:11:37.869    "num_base_bdevs_discovered": 4,
00:11:37.869    "num_base_bdevs_operational": 4,
00:11:37.869    "base_bdevs_list": [
00:11:37.869      {
00:11:37.869        "name": "NewBaseBdev",
00:11:37.869        "uuid": "af5c05a6-c66a-4a32-a527-58b400c40e3a",
00:11:37.869        "is_configured": true,
00:11:37.869        "data_offset": 0,
00:11:37.869        "data_size": 65536
00:11:37.869      },
00:11:37.869      {
00:11:37.869        "name": "BaseBdev2",
00:11:37.869        "uuid": "e97e13a8-cd16-4be3-b406-f214b4e2b0ce",
00:11:37.869        "is_configured": true,
00:11:37.869        "data_offset": 0,
00:11:37.869        "data_size": 65536
00:11:37.869      },
00:11:37.869      {
00:11:37.869        "name": "BaseBdev3",
00:11:37.869        "uuid": "c423997d-7ca6-4af7-b3dd-aa4aab291992",
00:11:37.869        "is_configured": true,
00:11:37.869        "data_offset": 0,
00:11:37.869        "data_size": 65536
00:11:37.869      },
00:11:37.869      {
00:11:37.869        "name": "BaseBdev4",
00:11:37.869        "uuid": "857cc2d3-055a-4943-90e6-c0137184621a",
00:11:37.869        "is_configured": true,
00:11:37.869        "data_offset": 0,
00:11:37.869        "data_size": 65536
00:11:37.869      }
00:11:37.869    ]
00:11:37.869  }'
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:37.869   11:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:38.438   11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:11:38.438   11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:11:38.438   11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:38.438   11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:38.438   11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:11:38.438   11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:38.438    11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:11:38.438    11:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:38.438    11:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:38.438    11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:38.438  [2024-12-16 11:33:04.323986] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:38.438    11:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:38.438   11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:38.438    "name": "Existed_Raid",
00:11:38.438    "aliases": [
00:11:38.438      "45f92f4d-86f0-4f14-b0f8-b339dcc9c01e"
00:11:38.438    ],
00:11:38.438    "product_name": "Raid Volume",
00:11:38.438    "block_size": 512,
00:11:38.438    "num_blocks": 262144,
00:11:38.438    "uuid": "45f92f4d-86f0-4f14-b0f8-b339dcc9c01e",
00:11:38.438    "assigned_rate_limits": {
00:11:38.438      "rw_ios_per_sec": 0,
00:11:38.438      "rw_mbytes_per_sec": 0,
00:11:38.438      "r_mbytes_per_sec": 0,
00:11:38.438      "w_mbytes_per_sec": 0
00:11:38.438    },
00:11:38.438    "claimed": false,
00:11:38.438    "zoned": false,
00:11:38.438    "supported_io_types": {
00:11:38.438      "read": true,
00:11:38.438      "write": true,
00:11:38.438      "unmap": true,
00:11:38.438      "flush": true,
00:11:38.438      "reset": true,
00:11:38.438      "nvme_admin": false,
00:11:38.438      "nvme_io": false,
00:11:38.438      "nvme_io_md": false,
00:11:38.438      "write_zeroes": true,
00:11:38.438      "zcopy": false,
00:11:38.438      "get_zone_info": false,
00:11:38.438      "zone_management": false,
00:11:38.438      "zone_append": false,
00:11:38.438      "compare": false,
00:11:38.438      "compare_and_write": false,
00:11:38.438      "abort": false,
00:11:38.438      "seek_hole": false,
00:11:38.438      "seek_data": false,
00:11:38.438      "copy": false,
00:11:38.438      "nvme_iov_md": false
00:11:38.438    },
00:11:38.438    "memory_domains": [
00:11:38.438      {
00:11:38.438        "dma_device_id": "system",
00:11:38.438        "dma_device_type": 1
00:11:38.438      },
00:11:38.438      {
00:11:38.438        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:38.438        "dma_device_type": 2
00:11:38.438      },
00:11:38.438      {
00:11:38.438        "dma_device_id": "system",
00:11:38.438        "dma_device_type": 1
00:11:38.438      },
00:11:38.438      {
00:11:38.438        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:38.438        "dma_device_type": 2
00:11:38.438      },
00:11:38.438      {
00:11:38.438        "dma_device_id": "system",
00:11:38.438        "dma_device_type": 1
00:11:38.438      },
00:11:38.438      {
00:11:38.438        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:38.438        "dma_device_type": 2
00:11:38.438      },
00:11:38.438      {
00:11:38.438        "dma_device_id": "system",
00:11:38.438        "dma_device_type": 1
00:11:38.438      },
00:11:38.438      {
00:11:38.438        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:38.438        "dma_device_type": 2
00:11:38.438      }
00:11:38.438    ],
00:11:38.438    "driver_specific": {
00:11:38.438      "raid": {
00:11:38.438        "uuid": "45f92f4d-86f0-4f14-b0f8-b339dcc9c01e",
00:11:38.438        "strip_size_kb": 64,
00:11:38.438        "state": "online",
00:11:38.438        "raid_level": "concat",
00:11:38.438        "superblock": false,
00:11:38.438        "num_base_bdevs": 4,
00:11:38.438        "num_base_bdevs_discovered": 4,
00:11:38.438        "num_base_bdevs_operational": 4,
00:11:38.438        "base_bdevs_list": [
00:11:38.438          {
00:11:38.438            "name": "NewBaseBdev",
00:11:38.438            "uuid": "af5c05a6-c66a-4a32-a527-58b400c40e3a",
00:11:38.438            "is_configured": true,
00:11:38.438            "data_offset": 0,
00:11:38.438            "data_size": 65536
00:11:38.438          },
00:11:38.438          {
00:11:38.438            "name": "BaseBdev2",
00:11:38.438            "uuid": "e97e13a8-cd16-4be3-b406-f214b4e2b0ce",
00:11:38.438            "is_configured": true,
00:11:38.438            "data_offset": 0,
00:11:38.438            "data_size": 65536
00:11:38.438          },
00:11:38.438          {
00:11:38.438            "name": "BaseBdev3",
00:11:38.438            "uuid": "c423997d-7ca6-4af7-b3dd-aa4aab291992",
00:11:38.438            "is_configured": true,
00:11:38.438            "data_offset": 0,
00:11:38.438            "data_size": 65536
00:11:38.438          },
00:11:38.438          {
00:11:38.438            "name": "BaseBdev4",
00:11:38.438            "uuid": "857cc2d3-055a-4943-90e6-c0137184621a",
00:11:38.438            "is_configured": true,
00:11:38.438            "data_offset": 0,
00:11:38.438            "data_size": 65536
00:11:38.438          }
00:11:38.438        ]
00:11:38.438      }
00:11:38.438    }
00:11:38.438  }'
00:11:38.438    11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:38.438   11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:11:38.438  BaseBdev2
00:11:38.438  BaseBdev3
00:11:38.438  BaseBdev4'
00:11:38.438    11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:38.438   11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:11:38.438   11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:38.438    11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:38.438    11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:11:38.438    11:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:38.438    11:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:38.438    11:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:38.438   11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:38.438   11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:38.438   11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:38.438    11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:38.438    11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:11:38.438    11:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:38.438    11:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:38.438    11:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:38.697   11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:38.697   11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:38.697   11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:38.698    11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:11:38.698    11:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:38.698    11:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:38.698    11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:38.698    11:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:38.698   11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:38.698   11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:38.698   11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:38.698    11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:11:38.698    11:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:38.698    11:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:38.698    11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:38.698    11:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:38.698   11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:38.698   11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
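Note: verify_raid_bdev_properties, traced above, compares each base bdev's geometry tuple against the raid bdev's. A compact sketch of the same loop (scripts/rpc.py path assumed):

  cmp_raid=$(./scripts/rpc.py bdev_get_bdevs -b Existed_Raid \
             | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
  for name in NewBaseBdev BaseBdev2 BaseBdev3 BaseBdev4; do
      cmp_base=$(./scripts/rpc.py bdev_get_bdevs -b "$name" \
                 | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
      # Malloc bdevs report no metadata or DIF fields, so both sides reduce to "512   ".
      [[ "$cmp_base" == "$cmp_raid" ]]
  done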
00:11:38.698   11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:11:38.698   11:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:38.698   11:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:38.698  [2024-12-16 11:33:04.607174] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:38.698  [2024-12-16 11:33:04.607206] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:38.698  [2024-12-16 11:33:04.607308] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:38.698  [2024-12-16 11:33:04.607379] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:38.698  [2024-12-16 11:33:04.607391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline
00:11:38.698   11:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:38.698   11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82495
00:11:38.698   11:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 82495 ']'
00:11:38.698   11:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 82495
00:11:38.698    11:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname
00:11:38.698   11:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:38.698    11:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82495
00:11:38.698   11:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:11:38.698   11:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:11:38.698  killing process with pid 82495
00:11:38.698   11:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82495'
00:11:38.698   11:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 82495
00:11:38.698  [2024-12-16 11:33:04.652416] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:11:38.698   11:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 82495
00:11:38.698  [2024-12-16 11:33:04.694350] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:11:38.956   11:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:11:38.956  
00:11:38.956  real	0m9.819s
00:11:38.956  user	0m16.810s
00:11:38.956  sys	0m2.042s
00:11:38.956   11:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:38.956   11:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:38.956  ************************************
00:11:38.956  END TEST raid_state_function_test
00:11:38.956  ************************************
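Note: teardown for the test above goes through killprocess: the bdev_svc pid is checked and then killed, which drives the raid_bdev_fini_start/raid_bdev_exit debug lines. A minimal equivalent, assuming $raid_pid holds the pid of the app started for this test:

  if kill -0 "$raid_pid" 2>/dev/null; then
      kill "$raid_pid"      # triggers the raid fini/exit debug output seen above
      wait "$raid_pid"
  fi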
00:11:38.956   11:33:04 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true
00:11:38.956   11:33:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:11:38.956   11:33:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:38.956   11:33:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:11:38.956  ************************************
00:11:38.956  START TEST raid_state_function_test_sb
00:11:38.956  ************************************
00:11:38.956   11:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true
00:11:38.956   11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:11:38.956   11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:11:38.956   11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:11:38.956   11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:11:38.956    11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:11:38.956    11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:38.956    11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:11:38.957    11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:38.957    11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:38.957    11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:11:38.957    11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:38.957    11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:38.957    11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:11:38.957    11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:38.957    11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:38.957    11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:11:38.957    11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:38.957    11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:38.957   11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:11:38.957   11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:11:38.957   11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:11:38.957   11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:11:38.957   11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:11:38.957   11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:11:38.957   11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:11:38.957   11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:11:38.957   11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:11:38.957   11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:11:38.957   11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:11:38.957   11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83150
00:11:38.957   11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:11:38.957  Process raid pid: 83150
00:11:38.957   11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83150'
00:11:38.957   11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83150
00:11:38.957   11:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 83150 ']'
00:11:39.216   11:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:39.216   11:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:11:39.216  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:39.216   11:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:39.216   11:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:11:39.216   11:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:39.216  [2024-12-16 11:33:05.099612] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:11:39.216  [2024-12-16 11:33:05.099732] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:39.216  [2024-12-16 11:33:05.261372] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:39.475  [2024-12-16 11:33:05.313227] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:11:39.475  [2024-12-16 11:33:05.356924] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:39.475  [2024-12-16 11:33:05.356967] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:40.044   11:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:11:40.044   11:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0
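Note: the _sb variant starts a fresh bdev_svc instance with raid debug logging and waits for its RPC socket before creating anything. A hedged bring-up sketch (binary path shortened from the log; the polling loop stands in for the harness' waitforlisten):

  ./test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid &
  raid_pid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done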
00:11:40.044   11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:40.044   11:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.044   11:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:40.044  [2024-12-16 11:33:05.963265] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:40.044  [2024-12-16 11:33:05.963313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:40.044  [2024-12-16 11:33:05.963341] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:40.044  [2024-12-16 11:33:05.963352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:40.044  [2024-12-16 11:33:05.963359] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:40.044  [2024-12-16 11:33:05.963371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:40.044  [2024-12-16 11:33:05.963378] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:11:40.044  [2024-12-16 11:33:05.963388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:11:40.044   11:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.044   11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:40.044   11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:40.045   11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:40.045   11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:40.045   11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:40.045   11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:40.045   11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:40.045   11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:40.045   11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:40.045   11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:40.045    11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:40.045    11:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:40.045    11:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.045    11:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:40.045    11:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.045   11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:40.045    "name": "Existed_Raid",
00:11:40.045    "uuid": "e6b02d99-d59c-4528-8f15-78f43e71d6e9",
00:11:40.045    "strip_size_kb": 64,
00:11:40.045    "state": "configuring",
00:11:40.045    "raid_level": "concat",
00:11:40.045    "superblock": true,
00:11:40.045    "num_base_bdevs": 4,
00:11:40.045    "num_base_bdevs_discovered": 0,
00:11:40.045    "num_base_bdevs_operational": 4,
00:11:40.045    "base_bdevs_list": [
00:11:40.045      {
00:11:40.045        "name": "BaseBdev1",
00:11:40.045        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:40.045        "is_configured": false,
00:11:40.045        "data_offset": 0,
00:11:40.045        "data_size": 0
00:11:40.045      },
00:11:40.045      {
00:11:40.045        "name": "BaseBdev2",
00:11:40.045        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:40.045        "is_configured": false,
00:11:40.045        "data_offset": 0,
00:11:40.045        "data_size": 0
00:11:40.045      },
00:11:40.045      {
00:11:40.045        "name": "BaseBdev3",
00:11:40.045        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:40.045        "is_configured": false,
00:11:40.045        "data_offset": 0,
00:11:40.045        "data_size": 0
00:11:40.045      },
00:11:40.045      {
00:11:40.045        "name": "BaseBdev4",
00:11:40.045        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:40.045        "is_configured": false,
00:11:40.045        "data_offset": 0,
00:11:40.045        "data_size": 0
00:11:40.045      }
00:11:40.045    ]
00:11:40.045  }'
00:11:40.045   11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:40.045   11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:40.614  [2024-12-16 11:33:06.434355] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:40.614  [2024-12-16 11:33:06.434407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:40.614  [2024-12-16 11:33:06.446365] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:40.614  [2024-12-16 11:33:06.446407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:40.614  [2024-12-16 11:33:06.446432] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:40.614  [2024-12-16 11:33:06.446443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:40.614  [2024-12-16 11:33:06.446450] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:40.614  [2024-12-16 11:33:06.446465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:40.614  [2024-12-16 11:33:06.446472] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:11:40.614  [2024-12-16 11:33:06.446482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:40.614  [2024-12-16 11:33:06.467485] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:40.614  BaseBdev1
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:40.614  [
00:11:40.614  {
00:11:40.614  "name": "BaseBdev1",
00:11:40.614  "aliases": [
00:11:40.614  "ae9ede97-2ed2-4c88-8a50-d66196bff0a1"
00:11:40.614  ],
00:11:40.614  "product_name": "Malloc disk",
00:11:40.614  "block_size": 512,
00:11:40.614  "num_blocks": 65536,
00:11:40.614  "uuid": "ae9ede97-2ed2-4c88-8a50-d66196bff0a1",
00:11:40.614  "assigned_rate_limits": {
00:11:40.614  "rw_ios_per_sec": 0,
00:11:40.614  "rw_mbytes_per_sec": 0,
00:11:40.614  "r_mbytes_per_sec": 0,
00:11:40.614  "w_mbytes_per_sec": 0
00:11:40.614  },
00:11:40.614  "claimed": true,
00:11:40.614  "claim_type": "exclusive_write",
00:11:40.614  "zoned": false,
00:11:40.614  "supported_io_types": {
00:11:40.614  "read": true,
00:11:40.614  "write": true,
00:11:40.614  "unmap": true,
00:11:40.614  "flush": true,
00:11:40.614  "reset": true,
00:11:40.614  "nvme_admin": false,
00:11:40.614  "nvme_io": false,
00:11:40.614  "nvme_io_md": false,
00:11:40.614  "write_zeroes": true,
00:11:40.614  "zcopy": true,
00:11:40.614  "get_zone_info": false,
00:11:40.614  "zone_management": false,
00:11:40.614  "zone_append": false,
00:11:40.614  "compare": false,
00:11:40.614  "compare_and_write": false,
00:11:40.614  "abort": true,
00:11:40.614  "seek_hole": false,
00:11:40.614  "seek_data": false,
00:11:40.614  "copy": true,
00:11:40.614  "nvme_iov_md": false
00:11:40.614  },
00:11:40.614  "memory_domains": [
00:11:40.614  {
00:11:40.614  "dma_device_id": "system",
00:11:40.614  "dma_device_type": 1
00:11:40.614  },
00:11:40.614  {
00:11:40.614  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:40.614  "dma_device_type": 2
00:11:40.614  }
00:11:40.614  ],
00:11:40.614  "driver_specific": {}
00:11:40.614  }
00:11:40.614  ]
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:40.614    11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:40.614    11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.614    11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:40.614    11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:40.614    11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:40.614    "name": "Existed_Raid",
00:11:40.614    "uuid": "c0c78ed3-5a19-4573-bb68-ed9985b30128",
00:11:40.614    "strip_size_kb": 64,
00:11:40.614    "state": "configuring",
00:11:40.614    "raid_level": "concat",
00:11:40.614    "superblock": true,
00:11:40.614    "num_base_bdevs": 4,
00:11:40.614    "num_base_bdevs_discovered": 1,
00:11:40.614    "num_base_bdevs_operational": 4,
00:11:40.614    "base_bdevs_list": [
00:11:40.614      {
00:11:40.614        "name": "BaseBdev1",
00:11:40.614        "uuid": "ae9ede97-2ed2-4c88-8a50-d66196bff0a1",
00:11:40.614        "is_configured": true,
00:11:40.614        "data_offset": 2048,
00:11:40.614        "data_size": 63488
00:11:40.614      },
00:11:40.614      {
00:11:40.614        "name": "BaseBdev2",
00:11:40.614        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:40.614        "is_configured": false,
00:11:40.614        "data_offset": 0,
00:11:40.614        "data_size": 0
00:11:40.614      },
00:11:40.614      {
00:11:40.614        "name": "BaseBdev3",
00:11:40.614        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:40.614        "is_configured": false,
00:11:40.614        "data_offset": 0,
00:11:40.614        "data_size": 0
00:11:40.614      },
00:11:40.614      {
00:11:40.614        "name": "BaseBdev4",
00:11:40.614        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:40.614        "is_configured": false,
00:11:40.614        "data_offset": 0,
00:11:40.614        "data_size": 0
00:11:40.614      }
00:11:40.614    ]
00:11:40.614  }'
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:40.614   11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
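Note: because this run passes -s, the array is created with on-disk superblocks; that is consistent with BaseBdev1 reporting data_offset 2048 and data_size 63488 above (2048 of its 65536 blocks, i.e. 1 MiB at 512-byte blocks, reserved for metadata), where the non-superblock test earlier reported data_offset 0. The creation call as it would be issued through rpc.py (path assumed):

  ./scripts/rpc.py bdev_raid_create -z 64 -s -r concat \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid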
00:11:41.183   11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:11:41.183   11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:41.183   11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:41.183  [2024-12-16 11:33:06.978721] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:41.183  [2024-12-16 11:33:06.978786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:11:41.183   11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:41.183   11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:41.183   11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:41.183   11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:41.183  [2024-12-16 11:33:06.990765] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:41.183  [2024-12-16 11:33:06.992798] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:41.183  [2024-12-16 11:33:06.992838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:41.183  [2024-12-16 11:33:06.992848] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:41.183  [2024-12-16 11:33:06.992857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:41.183  [2024-12-16 11:33:06.992863] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:11:41.183  [2024-12-16 11:33:06.992872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:11:41.183   11:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:41.183   11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:11:41.183   11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:41.184   11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:41.184   11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:41.184   11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:41.184   11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:41.184   11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:41.184   11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:41.184   11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:41.184   11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:41.184   11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:41.184   11:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:41.184    11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:41.184    11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:41.184    11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:41.184    11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:41.184    11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:41.184   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:41.184    "name": "Existed_Raid",
00:11:41.184    "uuid": "b103b1e5-0202-4972-8550-2117db29643d",
00:11:41.184    "strip_size_kb": 64,
00:11:41.184    "state": "configuring",
00:11:41.184    "raid_level": "concat",
00:11:41.184    "superblock": true,
00:11:41.184    "num_base_bdevs": 4,
00:11:41.184    "num_base_bdevs_discovered": 1,
00:11:41.184    "num_base_bdevs_operational": 4,
00:11:41.184    "base_bdevs_list": [
00:11:41.184      {
00:11:41.184        "name": "BaseBdev1",
00:11:41.184        "uuid": "ae9ede97-2ed2-4c88-8a50-d66196bff0a1",
00:11:41.184        "is_configured": true,
00:11:41.184        "data_offset": 2048,
00:11:41.184        "data_size": 63488
00:11:41.184      },
00:11:41.184      {
00:11:41.184        "name": "BaseBdev2",
00:11:41.184        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:41.184        "is_configured": false,
00:11:41.184        "data_offset": 0,
00:11:41.184        "data_size": 0
00:11:41.184      },
00:11:41.184      {
00:11:41.184        "name": "BaseBdev3",
00:11:41.184        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:41.184        "is_configured": false,
00:11:41.184        "data_offset": 0,
00:11:41.184        "data_size": 0
00:11:41.184      },
00:11:41.184      {
00:11:41.184        "name": "BaseBdev4",
00:11:41.184        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:41.184        "is_configured": false,
00:11:41.184        "data_offset": 0,
00:11:41.184        "data_size": 0
00:11:41.184      }
00:11:41.184    ]
00:11:41.184  }'
00:11:41.184   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:41.184   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
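The trace above re-creates Existed_Raid with four named base bdevs that do not all exist yet, so the raid bdev stays in the "configuring" state with only BaseBdev1 discovered. The loop that follows registers the remaining base bdevs one at a time and re-reads the raid state after each step. A minimal sketch of that step outside the harness, assuming the standalone rpc.py client that rpc_cmd wraps here:

    # create the raid bdev first; it remains "configuring" until every base bdev exists
    rpc.py bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # register one missing base bdev (32 MiB of RAM, 512-byte blocks) and re-check the state
    rpc.py bdev_malloc_create 32 512 -b BaseBdev2
    rpc.py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'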
00:11:41.443   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:11:41.443   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:41.443   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:41.443  [2024-12-16 11:33:07.430105] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:41.443  BaseBdev2
00:11:41.443   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:41.443   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:11:41.443   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:11:41.443   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:41.443   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:11:41.443   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:41.443   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:41.443   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:41.443   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:41.443   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:41.443   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:41.443   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:11:41.443   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:41.443   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:41.443  [
00:11:41.443  {
00:11:41.443  "name": "BaseBdev2",
00:11:41.443  "aliases": [
00:11:41.443  "e960f7a2-b632-4d9a-9193-7b0325b08f41"
00:11:41.443  ],
00:11:41.443  "product_name": "Malloc disk",
00:11:41.443  "block_size": 512,
00:11:41.443  "num_blocks": 65536,
00:11:41.443  "uuid": "e960f7a2-b632-4d9a-9193-7b0325b08f41",
00:11:41.443  "assigned_rate_limits": {
00:11:41.443  "rw_ios_per_sec": 0,
00:11:41.443  "rw_mbytes_per_sec": 0,
00:11:41.443  "r_mbytes_per_sec": 0,
00:11:41.443  "w_mbytes_per_sec": 0
00:11:41.443  },
00:11:41.443  "claimed": true,
00:11:41.443  "claim_type": "exclusive_write",
00:11:41.443  "zoned": false,
00:11:41.443  "supported_io_types": {
00:11:41.443  "read": true,
00:11:41.443  "write": true,
00:11:41.443  "unmap": true,
00:11:41.443  "flush": true,
00:11:41.443  "reset": true,
00:11:41.443  "nvme_admin": false,
00:11:41.443  "nvme_io": false,
00:11:41.443  "nvme_io_md": false,
00:11:41.443  "write_zeroes": true,
00:11:41.443  "zcopy": true,
00:11:41.443  "get_zone_info": false,
00:11:41.443  "zone_management": false,
00:11:41.443  "zone_append": false,
00:11:41.443  "compare": false,
00:11:41.443  "compare_and_write": false,
00:11:41.443  "abort": true,
00:11:41.443  "seek_hole": false,
00:11:41.443  "seek_data": false,
00:11:41.443  "copy": true,
00:11:41.443  "nvme_iov_md": false
00:11:41.443  },
00:11:41.443  "memory_domains": [
00:11:41.443  {
00:11:41.443  "dma_device_id": "system",
00:11:41.443  "dma_device_type": 1
00:11:41.443  },
00:11:41.443  {
00:11:41.443  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:41.443  "dma_device_type": 2
00:11:41.443  }
00:11:41.443  ],
00:11:41.443  "driver_specific": {}
00:11:41.443  }
00:11:41.443  ]
00:11:41.443   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:41.443   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
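waitforbdev above is how the test avoids racing bdev registration: after bdev_malloc_create it issues bdev_wait_for_examine and then polls bdev_get_bdevs with a timeout before touching the new device. The equivalent standalone calls, as a sketch with the same rpc.py client assumed:

    # block until all registered bdevs have finished their examine phase
    rpc.py bdev_wait_for_examine
    # wait up to 2000 ms for BaseBdev2 to show up before the next verification step
    rpc.py bdev_get_bdevs -b BaseBdev2 -t 2000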
00:11:41.443   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:41.443   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:41.443   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:41.443   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:41.443   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:41.443   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:41.443   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:41.443   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:41.443   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:41.443   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:41.443   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:41.443   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:41.443    11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:41.443    11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:41.443    11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:41.443    11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:41.443    11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:41.741   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:41.741    "name": "Existed_Raid",
00:11:41.741    "uuid": "b103b1e5-0202-4972-8550-2117db29643d",
00:11:41.741    "strip_size_kb": 64,
00:11:41.741    "state": "configuring",
00:11:41.741    "raid_level": "concat",
00:11:41.741    "superblock": true,
00:11:41.741    "num_base_bdevs": 4,
00:11:41.741    "num_base_bdevs_discovered": 2,
00:11:41.741    "num_base_bdevs_operational": 4,
00:11:41.741    "base_bdevs_list": [
00:11:41.741      {
00:11:41.741        "name": "BaseBdev1",
00:11:41.741        "uuid": "ae9ede97-2ed2-4c88-8a50-d66196bff0a1",
00:11:41.741        "is_configured": true,
00:11:41.741        "data_offset": 2048,
00:11:41.741        "data_size": 63488
00:11:41.741      },
00:11:41.742      {
00:11:41.742        "name": "BaseBdev2",
00:11:41.742        "uuid": "e960f7a2-b632-4d9a-9193-7b0325b08f41",
00:11:41.742        "is_configured": true,
00:11:41.742        "data_offset": 2048,
00:11:41.742        "data_size": 63488
00:11:41.742      },
00:11:41.742      {
00:11:41.742        "name": "BaseBdev3",
00:11:41.742        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:41.742        "is_configured": false,
00:11:41.742        "data_offset": 0,
00:11:41.742        "data_size": 0
00:11:41.742      },
00:11:41.742      {
00:11:41.742        "name": "BaseBdev4",
00:11:41.742        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:41.742        "is_configured": false,
00:11:41.742        "data_offset": 0,
00:11:41.742        "data_size": 0
00:11:41.742      }
00:11:41.742    ]
00:11:41.742  }'
00:11:41.742   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:41.742   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:42.001   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:11:42.001   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.001   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:42.001  [2024-12-16 11:33:07.876861] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:42.001  BaseBdev3
00:11:42.001   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.001   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:11:42.001   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:11:42.001   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:42.001   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:11:42.001   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:42.001   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:42.001   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:42.001   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.001   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:42.001   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.001   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:11:42.001   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.001   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:42.001  [
00:11:42.001  {
00:11:42.001  "name": "BaseBdev3",
00:11:42.001  "aliases": [
00:11:42.001  "2d348c8a-b495-433a-9a53-d1b3217af638"
00:11:42.001  ],
00:11:42.001  "product_name": "Malloc disk",
00:11:42.001  "block_size": 512,
00:11:42.001  "num_blocks": 65536,
00:11:42.001  "uuid": "2d348c8a-b495-433a-9a53-d1b3217af638",
00:11:42.001  "assigned_rate_limits": {
00:11:42.001  "rw_ios_per_sec": 0,
00:11:42.001  "rw_mbytes_per_sec": 0,
00:11:42.001  "r_mbytes_per_sec": 0,
00:11:42.001  "w_mbytes_per_sec": 0
00:11:42.001  },
00:11:42.001  "claimed": true,
00:11:42.001  "claim_type": "exclusive_write",
00:11:42.001  "zoned": false,
00:11:42.001  "supported_io_types": {
00:11:42.001  "read": true,
00:11:42.001  "write": true,
00:11:42.001  "unmap": true,
00:11:42.001  "flush": true,
00:11:42.001  "reset": true,
00:11:42.001  "nvme_admin": false,
00:11:42.001  "nvme_io": false,
00:11:42.001  "nvme_io_md": false,
00:11:42.001  "write_zeroes": true,
00:11:42.001  "zcopy": true,
00:11:42.001  "get_zone_info": false,
00:11:42.001  "zone_management": false,
00:11:42.001  "zone_append": false,
00:11:42.001  "compare": false,
00:11:42.001  "compare_and_write": false,
00:11:42.001  "abort": true,
00:11:42.001  "seek_hole": false,
00:11:42.001  "seek_data": false,
00:11:42.001  "copy": true,
00:11:42.001  "nvme_iov_md": false
00:11:42.001  },
00:11:42.001  "memory_domains": [
00:11:42.001  {
00:11:42.001  "dma_device_id": "system",
00:11:42.001  "dma_device_type": 1
00:11:42.001  },
00:11:42.001  {
00:11:42.001  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:42.001  "dma_device_type": 2
00:11:42.001  }
00:11:42.001  ],
00:11:42.001  "driver_specific": {}
00:11:42.001  }
00:11:42.001  ]
00:11:42.002   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.002   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:11:42.002   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:42.002   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:42.002   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:42.002   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:42.002   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:42.002   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:42.002   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:42.002   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:42.002   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:42.002   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:42.002   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:42.002   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:42.002    11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:42.002    11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:42.002    11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.002    11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:42.002    11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.002   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:42.002    "name": "Existed_Raid",
00:11:42.002    "uuid": "b103b1e5-0202-4972-8550-2117db29643d",
00:11:42.002    "strip_size_kb": 64,
00:11:42.002    "state": "configuring",
00:11:42.002    "raid_level": "concat",
00:11:42.002    "superblock": true,
00:11:42.002    "num_base_bdevs": 4,
00:11:42.002    "num_base_bdevs_discovered": 3,
00:11:42.002    "num_base_bdevs_operational": 4,
00:11:42.002    "base_bdevs_list": [
00:11:42.002      {
00:11:42.002        "name": "BaseBdev1",
00:11:42.002        "uuid": "ae9ede97-2ed2-4c88-8a50-d66196bff0a1",
00:11:42.002        "is_configured": true,
00:11:42.002        "data_offset": 2048,
00:11:42.002        "data_size": 63488
00:11:42.002      },
00:11:42.002      {
00:11:42.002        "name": "BaseBdev2",
00:11:42.002        "uuid": "e960f7a2-b632-4d9a-9193-7b0325b08f41",
00:11:42.002        "is_configured": true,
00:11:42.002        "data_offset": 2048,
00:11:42.002        "data_size": 63488
00:11:42.002      },
00:11:42.002      {
00:11:42.002        "name": "BaseBdev3",
00:11:42.002        "uuid": "2d348c8a-b495-433a-9a53-d1b3217af638",
00:11:42.002        "is_configured": true,
00:11:42.002        "data_offset": 2048,
00:11:42.002        "data_size": 63488
00:11:42.002      },
00:11:42.002      {
00:11:42.002        "name": "BaseBdev4",
00:11:42.002        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:42.002        "is_configured": false,
00:11:42.002        "data_offset": 0,
00:11:42.002        "data_size": 0
00:11:42.002      }
00:11:42.002    ]
00:11:42.002  }'
00:11:42.002   11:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:42.002   11:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:42.571  [2024-12-16 11:33:08.399343] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:11:42.571  [2024-12-16 11:33:08.399589] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:11:42.571  [2024-12-16 11:33:08.399607] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:11:42.571  [2024-12-16 11:33:08.399963] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:11:42.571  BaseBdev4
00:11:42.571  [2024-12-16 11:33:08.400122] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:11:42.571  [2024-12-16 11:33:08.400137] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:11:42.571  [2024-12-16 11:33:08.400272] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:42.571  [
00:11:42.571  {
00:11:42.571  "name": "BaseBdev4",
00:11:42.571  "aliases": [
00:11:42.571  "077d1b57-2371-43aa-9fbe-ee7e11374371"
00:11:42.571  ],
00:11:42.571  "product_name": "Malloc disk",
00:11:42.571  "block_size": 512,
00:11:42.571  "num_blocks": 65536,
00:11:42.571  "uuid": "077d1b57-2371-43aa-9fbe-ee7e11374371",
00:11:42.571  "assigned_rate_limits": {
00:11:42.571  "rw_ios_per_sec": 0,
00:11:42.571  "rw_mbytes_per_sec": 0,
00:11:42.571  "r_mbytes_per_sec": 0,
00:11:42.571  "w_mbytes_per_sec": 0
00:11:42.571  },
00:11:42.571  "claimed": true,
00:11:42.571  "claim_type": "exclusive_write",
00:11:42.571  "zoned": false,
00:11:42.571  "supported_io_types": {
00:11:42.571  "read": true,
00:11:42.571  "write": true,
00:11:42.571  "unmap": true,
00:11:42.571  "flush": true,
00:11:42.571  "reset": true,
00:11:42.571  "nvme_admin": false,
00:11:42.571  "nvme_io": false,
00:11:42.571  "nvme_io_md": false,
00:11:42.571  "write_zeroes": true,
00:11:42.571  "zcopy": true,
00:11:42.571  "get_zone_info": false,
00:11:42.571  "zone_management": false,
00:11:42.571  "zone_append": false,
00:11:42.571  "compare": false,
00:11:42.571  "compare_and_write": false,
00:11:42.571  "abort": true,
00:11:42.571  "seek_hole": false,
00:11:42.571  "seek_data": false,
00:11:42.571  "copy": true,
00:11:42.571  "nvme_iov_md": false
00:11:42.571  },
00:11:42.571  "memory_domains": [
00:11:42.571  {
00:11:42.571  "dma_device_id": "system",
00:11:42.571  "dma_device_type": 1
00:11:42.571  },
00:11:42.571  {
00:11:42.571  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:42.571  "dma_device_type": 2
00:11:42.571  }
00:11:42.571  ],
00:11:42.571  "driver_specific": {}
00:11:42.571  }
00:11:42.571  ]
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:42.571    11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:42.571    11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:42.571    11:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.571    11:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:42.571    11:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:42.571    "name": "Existed_Raid",
00:11:42.571    "uuid": "b103b1e5-0202-4972-8550-2117db29643d",
00:11:42.571    "strip_size_kb": 64,
00:11:42.571    "state": "online",
00:11:42.571    "raid_level": "concat",
00:11:42.571    "superblock": true,
00:11:42.571    "num_base_bdevs": 4,
00:11:42.571    "num_base_bdevs_discovered": 4,
00:11:42.571    "num_base_bdevs_operational": 4,
00:11:42.571    "base_bdevs_list": [
00:11:42.571      {
00:11:42.571        "name": "BaseBdev1",
00:11:42.571        "uuid": "ae9ede97-2ed2-4c88-8a50-d66196bff0a1",
00:11:42.571        "is_configured": true,
00:11:42.571        "data_offset": 2048,
00:11:42.571        "data_size": 63488
00:11:42.571      },
00:11:42.571      {
00:11:42.571        "name": "BaseBdev2",
00:11:42.571        "uuid": "e960f7a2-b632-4d9a-9193-7b0325b08f41",
00:11:42.571        "is_configured": true,
00:11:42.571        "data_offset": 2048,
00:11:42.571        "data_size": 63488
00:11:42.571      },
00:11:42.571      {
00:11:42.571        "name": "BaseBdev3",
00:11:42.571        "uuid": "2d348c8a-b495-433a-9a53-d1b3217af638",
00:11:42.571        "is_configured": true,
00:11:42.571        "data_offset": 2048,
00:11:42.571        "data_size": 63488
00:11:42.571      },
00:11:42.571      {
00:11:42.571        "name": "BaseBdev4",
00:11:42.571        "uuid": "077d1b57-2371-43aa-9fbe-ee7e11374371",
00:11:42.571        "is_configured": true,
00:11:42.571        "data_offset": 2048,
00:11:42.571        "data_size": 63488
00:11:42.571      }
00:11:42.571    ]
00:11:42.571  }'
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:42.571   11:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:42.830   11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:11:42.830   11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:11:42.830   11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:42.830   11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:42.830   11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:11:42.831   11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:42.831    11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:11:42.831    11:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.831    11:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:42.831    11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:42.831  [2024-12-16 11:33:08.835257] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:42.831    11:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.831   11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:42.831    "name": "Existed_Raid",
00:11:42.831    "aliases": [
00:11:42.831      "b103b1e5-0202-4972-8550-2117db29643d"
00:11:42.831    ],
00:11:42.831    "product_name": "Raid Volume",
00:11:42.831    "block_size": 512,
00:11:42.831    "num_blocks": 253952,
00:11:42.831    "uuid": "b103b1e5-0202-4972-8550-2117db29643d",
00:11:42.831    "assigned_rate_limits": {
00:11:42.831      "rw_ios_per_sec": 0,
00:11:42.831      "rw_mbytes_per_sec": 0,
00:11:42.831      "r_mbytes_per_sec": 0,
00:11:42.831      "w_mbytes_per_sec": 0
00:11:42.831    },
00:11:42.831    "claimed": false,
00:11:42.831    "zoned": false,
00:11:42.831    "supported_io_types": {
00:11:42.831      "read": true,
00:11:42.831      "write": true,
00:11:42.831      "unmap": true,
00:11:42.831      "flush": true,
00:11:42.831      "reset": true,
00:11:42.831      "nvme_admin": false,
00:11:42.831      "nvme_io": false,
00:11:42.831      "nvme_io_md": false,
00:11:42.831      "write_zeroes": true,
00:11:42.831      "zcopy": false,
00:11:42.831      "get_zone_info": false,
00:11:42.831      "zone_management": false,
00:11:42.831      "zone_append": false,
00:11:42.831      "compare": false,
00:11:42.831      "compare_and_write": false,
00:11:42.831      "abort": false,
00:11:42.831      "seek_hole": false,
00:11:42.831      "seek_data": false,
00:11:42.831      "copy": false,
00:11:42.831      "nvme_iov_md": false
00:11:42.831    },
00:11:42.831    "memory_domains": [
00:11:42.831      {
00:11:42.831        "dma_device_id": "system",
00:11:42.831        "dma_device_type": 1
00:11:42.831      },
00:11:42.831      {
00:11:42.831        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:42.831        "dma_device_type": 2
00:11:42.831      },
00:11:42.831      {
00:11:42.831        "dma_device_id": "system",
00:11:42.831        "dma_device_type": 1
00:11:42.831      },
00:11:42.831      {
00:11:42.831        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:42.831        "dma_device_type": 2
00:11:42.831      },
00:11:42.831      {
00:11:42.831        "dma_device_id": "system",
00:11:42.831        "dma_device_type": 1
00:11:42.831      },
00:11:42.831      {
00:11:42.831        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:42.831        "dma_device_type": 2
00:11:42.831      },
00:11:42.831      {
00:11:42.831        "dma_device_id": "system",
00:11:42.831        "dma_device_type": 1
00:11:42.831      },
00:11:42.831      {
00:11:42.831        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:42.831        "dma_device_type": 2
00:11:42.831      }
00:11:42.831    ],
00:11:42.831    "driver_specific": {
00:11:42.831      "raid": {
00:11:42.831        "uuid": "b103b1e5-0202-4972-8550-2117db29643d",
00:11:42.831        "strip_size_kb": 64,
00:11:42.831        "state": "online",
00:11:42.831        "raid_level": "concat",
00:11:42.831        "superblock": true,
00:11:42.831        "num_base_bdevs": 4,
00:11:42.831        "num_base_bdevs_discovered": 4,
00:11:42.831        "num_base_bdevs_operational": 4,
00:11:42.831        "base_bdevs_list": [
00:11:42.831          {
00:11:42.831            "name": "BaseBdev1",
00:11:42.831            "uuid": "ae9ede97-2ed2-4c88-8a50-d66196bff0a1",
00:11:42.831            "is_configured": true,
00:11:42.831            "data_offset": 2048,
00:11:42.831            "data_size": 63488
00:11:42.831          },
00:11:42.831          {
00:11:42.831            "name": "BaseBdev2",
00:11:42.831            "uuid": "e960f7a2-b632-4d9a-9193-7b0325b08f41",
00:11:42.831            "is_configured": true,
00:11:42.831            "data_offset": 2048,
00:11:42.831            "data_size": 63488
00:11:42.831          },
00:11:42.831          {
00:11:42.831            "name": "BaseBdev3",
00:11:42.831            "uuid": "2d348c8a-b495-433a-9a53-d1b3217af638",
00:11:42.831            "is_configured": true,
00:11:42.831            "data_offset": 2048,
00:11:42.831            "data_size": 63488
00:11:42.831          },
00:11:42.831          {
00:11:42.831            "name": "BaseBdev4",
00:11:42.831            "uuid": "077d1b57-2371-43aa-9fbe-ee7e11374371",
00:11:42.831            "is_configured": true,
00:11:42.831            "data_offset": 2048,
00:11:42.831            "data_size": 63488
00:11:42.831          }
00:11:42.831        ]
00:11:42.831      }
00:11:42.831    }
00:11:42.831  }'
00:11:42.831    11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:43.090   11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:11:43.090  BaseBdev2
00:11:43.090  BaseBdev3
00:11:43.090  BaseBdev4'
00:11:43.090    11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:43.090   11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:11:43.091   11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:43.091    11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:43.091    11:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:11:43.091    11:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:43.091    11:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:43.091    11:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:43.091   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:43.091   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:43.091   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:43.091    11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:11:43.091    11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:43.091    11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:43.091    11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:43.091    11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:43.091   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:43.091   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:43.091   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:43.091    11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:43.091    11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:11:43.091    11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:43.091    11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:43.091    11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:43.091   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:43.091   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:43.091   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:43.091    11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:43.091    11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:11:43.091    11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:43.091    11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:43.091    11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:43.091   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:43.091   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
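verify_raid_bdev_properties above compares the metadata layout of the assembled raid volume against each configured base bdev: the jq filter reduces both to a "block_size md_size md_interleave dif_type" tuple, which must match ("512   " in this run). A sketch of the same comparison by hand, rpc.py client assumed:

    # layout tuple of the raid volume
    rpc.py bdev_get_bdevs -b Existed_Raid | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
    # the same tuple for one base bdev; the test requires every base bdev to produce an identical line
    rpc.py bdev_get_bdevs -b BaseBdev1 | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'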
00:11:43.091   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:11:43.091   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:43.091   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:43.091  [2024-12-16 11:33:09.146371] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:11:43.091  [2024-12-16 11:33:09.146405] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:43.091  [2024-12-16 11:33:09.146469] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:43.350   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:43.351   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:11:43.351   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:11:43.351   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:11:43.351   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1
00:11:43.351   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:11:43.351   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3
00:11:43.351   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:43.351   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:11:43.351   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:43.351   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:43.351   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:43.351   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:43.351   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:43.351   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:43.351   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:43.351    11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:43.351    11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:43.351    11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:43.351    11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:43.351    11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:43.351   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:43.351    "name": "Existed_Raid",
00:11:43.351    "uuid": "b103b1e5-0202-4972-8550-2117db29643d",
00:11:43.351    "strip_size_kb": 64,
00:11:43.351    "state": "offline",
00:11:43.351    "raid_level": "concat",
00:11:43.351    "superblock": true,
00:11:43.351    "num_base_bdevs": 4,
00:11:43.351    "num_base_bdevs_discovered": 3,
00:11:43.351    "num_base_bdevs_operational": 3,
00:11:43.351    "base_bdevs_list": [
00:11:43.351      {
00:11:43.351        "name": null,
00:11:43.351        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:43.351        "is_configured": false,
00:11:43.351        "data_offset": 0,
00:11:43.351        "data_size": 63488
00:11:43.351      },
00:11:43.351      {
00:11:43.351        "name": "BaseBdev2",
00:11:43.351        "uuid": "e960f7a2-b632-4d9a-9193-7b0325b08f41",
00:11:43.351        "is_configured": true,
00:11:43.351        "data_offset": 2048,
00:11:43.351        "data_size": 63488
00:11:43.351      },
00:11:43.351      {
00:11:43.351        "name": "BaseBdev3",
00:11:43.351        "uuid": "2d348c8a-b495-433a-9a53-d1b3217af638",
00:11:43.351        "is_configured": true,
00:11:43.351        "data_offset": 2048,
00:11:43.351        "data_size": 63488
00:11:43.351      },
00:11:43.351      {
00:11:43.351        "name": "BaseBdev4",
00:11:43.351        "uuid": "077d1b57-2371-43aa-9fbe-ee7e11374371",
00:11:43.351        "is_configured": true,
00:11:43.351        "data_offset": 2048,
00:11:43.351        "data_size": 63488
00:11:43.351      }
00:11:43.351    ]
00:11:43.351  }'
00:11:43.351   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:43.351   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
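The state dump above is the expected result of deleting BaseBdev1: has_redundancy returns 1 for the concat level, so losing any base bdev is not tolerable and the harness expects Existed_Raid to drop from "online" to "offline" with three of four base bdevs remaining. As a bare sequence, rpc.py client assumed:

    # removing one malloc base bdev of a concat (non-redundant) array takes it offline
    rpc.py bdev_malloc_delete BaseBdev1
    rpc.py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'    # prints: offline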
00:11:43.610   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:11:43.610   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:43.610    11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:43.610    11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:43.610    11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:43.610    11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:11:43.610    11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:43.869   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:43.870  [2024-12-16 11:33:09.689198] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:43.870    11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:11:43.870    11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:43.870    11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:43.870    11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:43.870    11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:43.870  [2024-12-16 11:33:09.756695] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:43.870    11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:11:43.870    11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:43.870    11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:43.870    11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:43.870    11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:43.870  [2024-12-16 11:33:09.828256] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:11:43.870  [2024-12-16 11:33:09.828367] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:43.870    11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:43.870    11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:11:43.870    11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:43.870    11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:43.870    11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:43.870  BaseBdev2
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:43.870   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.130  [
00:11:44.130  {
00:11:44.130  "name": "BaseBdev2",
00:11:44.130  "aliases": [
00:11:44.130  "cd364b18-be3c-4508-8104-6f86596fd68e"
00:11:44.130  ],
00:11:44.130  "product_name": "Malloc disk",
00:11:44.130  "block_size": 512,
00:11:44.130  "num_blocks": 65536,
00:11:44.130  "uuid": "cd364b18-be3c-4508-8104-6f86596fd68e",
00:11:44.130  "assigned_rate_limits": {
00:11:44.130  "rw_ios_per_sec": 0,
00:11:44.130  "rw_mbytes_per_sec": 0,
00:11:44.130  "r_mbytes_per_sec": 0,
00:11:44.130  "w_mbytes_per_sec": 0
00:11:44.130  },
00:11:44.130  "claimed": false,
00:11:44.130  "zoned": false,
00:11:44.130  "supported_io_types": {
00:11:44.130  "read": true,
00:11:44.130  "write": true,
00:11:44.130  "unmap": true,
00:11:44.130  "flush": true,
00:11:44.130  "reset": true,
00:11:44.130  "nvme_admin": false,
00:11:44.130  "nvme_io": false,
00:11:44.130  "nvme_io_md": false,
00:11:44.130  "write_zeroes": true,
00:11:44.130  "zcopy": true,
00:11:44.130  "get_zone_info": false,
00:11:44.130  "zone_management": false,
00:11:44.130  "zone_append": false,
00:11:44.130  "compare": false,
00:11:44.130  "compare_and_write": false,
00:11:44.130  "abort": true,
00:11:44.130  "seek_hole": false,
00:11:44.130  "seek_data": false,
00:11:44.130  "copy": true,
00:11:44.130  "nvme_iov_md": false
00:11:44.130  },
00:11:44.130  "memory_domains": [
00:11:44.130  {
00:11:44.130  "dma_device_id": "system",
00:11:44.130  "dma_device_type": 1
00:11:44.130  },
00:11:44.130  {
00:11:44.130  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:44.130  "dma_device_type": 2
00:11:44.130  }
00:11:44.130  ],
00:11:44.130  "driver_specific": {}
00:11:44.130  }
00:11:44.130  ]
00:11:44.130   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.130   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:11:44.130   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:11:44.130   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:44.131   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:11:44.131   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.131   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.131  BaseBdev3
00:11:44.131   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.131   11:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:11:44.131   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:11:44.131   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:44.131   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:11:44.131   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:44.131   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:44.131   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:44.131   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.131   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.131   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.131   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:11:44.131   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.131   11:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.131  [
00:11:44.131  {
00:11:44.131  "name": "BaseBdev3",
00:11:44.131  "aliases": [
00:11:44.131  "1d23088b-b64f-4990-a6dc-c9d6c1ad22ac"
00:11:44.131  ],
00:11:44.131  "product_name": "Malloc disk",
00:11:44.131  "block_size": 512,
00:11:44.131  "num_blocks": 65536,
00:11:44.131  "uuid": "1d23088b-b64f-4990-a6dc-c9d6c1ad22ac",
00:11:44.131  "assigned_rate_limits": {
00:11:44.131  "rw_ios_per_sec": 0,
00:11:44.131  "rw_mbytes_per_sec": 0,
00:11:44.131  "r_mbytes_per_sec": 0,
00:11:44.131  "w_mbytes_per_sec": 0
00:11:44.131  },
00:11:44.131  "claimed": false,
00:11:44.131  "zoned": false,
00:11:44.131  "supported_io_types": {
00:11:44.131  "read": true,
00:11:44.131  "write": true,
00:11:44.131  "unmap": true,
00:11:44.131  "flush": true,
00:11:44.131  "reset": true,
00:11:44.131  "nvme_admin": false,
00:11:44.131  "nvme_io": false,
00:11:44.131  "nvme_io_md": false,
00:11:44.131  "write_zeroes": true,
00:11:44.131  "zcopy": true,
00:11:44.131  "get_zone_info": false,
00:11:44.131  "zone_management": false,
00:11:44.131  "zone_append": false,
00:11:44.131  "compare": false,
00:11:44.131  "compare_and_write": false,
00:11:44.131  "abort": true,
00:11:44.131  "seek_hole": false,
00:11:44.131  "seek_data": false,
00:11:44.131  "copy": true,
00:11:44.131  "nvme_iov_md": false
00:11:44.131  },
00:11:44.131  "memory_domains": [
00:11:44.131  {
00:11:44.131  "dma_device_id": "system",
00:11:44.131  "dma_device_type": 1
00:11:44.131  },
00:11:44.131  {
00:11:44.131  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:44.131  "dma_device_type": 2
00:11:44.131  }
00:11:44.131  ],
00:11:44.131  "driver_specific": {}
00:11:44.131  }
00:11:44.131  ]
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.131  BaseBdev4
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.131  [
00:11:44.131  {
00:11:44.131  "name": "BaseBdev4",
00:11:44.131  "aliases": [
00:11:44.131  "af71d848-9ec3-42ad-b277-153b1e873285"
00:11:44.131  ],
00:11:44.131  "product_name": "Malloc disk",
00:11:44.131  "block_size": 512,
00:11:44.131  "num_blocks": 65536,
00:11:44.131  "uuid": "af71d848-9ec3-42ad-b277-153b1e873285",
00:11:44.131  "assigned_rate_limits": {
00:11:44.131  "rw_ios_per_sec": 0,
00:11:44.131  "rw_mbytes_per_sec": 0,
00:11:44.131  "r_mbytes_per_sec": 0,
00:11:44.131  "w_mbytes_per_sec": 0
00:11:44.131  },
00:11:44.131  "claimed": false,
00:11:44.131  "zoned": false,
00:11:44.131  "supported_io_types": {
00:11:44.131  "read": true,
00:11:44.131  "write": true,
00:11:44.131  "unmap": true,
00:11:44.131  "flush": true,
00:11:44.131  "reset": true,
00:11:44.131  "nvme_admin": false,
00:11:44.131  "nvme_io": false,
00:11:44.131  "nvme_io_md": false,
00:11:44.131  "write_zeroes": true,
00:11:44.131  "zcopy": true,
00:11:44.131  "get_zone_info": false,
00:11:44.131  "zone_management": false,
00:11:44.131  "zone_append": false,
00:11:44.131  "compare": false,
00:11:44.131  "compare_and_write": false,
00:11:44.131  "abort": true,
00:11:44.131  "seek_hole": false,
00:11:44.131  "seek_data": false,
00:11:44.131  "copy": true,
00:11:44.131  "nvme_iov_md": false
00:11:44.131  },
00:11:44.131  "memory_domains": [
00:11:44.131  {
00:11:44.131  "dma_device_id": "system",
00:11:44.131  "dma_device_type": 1
00:11:44.131  },
00:11:44.131  {
00:11:44.131  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:44.131  "dma_device_type": 2
00:11:44.131  }
00:11:44.131  ],
00:11:44.131  "driver_specific": {}
00:11:44.131  }
00:11:44.131  ]
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.131  [2024-12-16 11:33:10.075069] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:44.131  [2024-12-16 11:33:10.075165] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:44.131  [2024-12-16 11:33:10.075240] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:44.131  [2024-12-16 11:33:10.077342] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:44.131  [2024-12-16 11:33:10.077450] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:44.131    11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:44.131    11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:44.131    11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.131    11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.131    11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.131   11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:44.131    "name": "Existed_Raid",
00:11:44.131    "uuid": "2a198cd1-6e5e-4e2a-961c-be2d3e9017cc",
00:11:44.131    "strip_size_kb": 64,
00:11:44.131    "state": "configuring",
00:11:44.131    "raid_level": "concat",
00:11:44.131    "superblock": true,
00:11:44.131    "num_base_bdevs": 4,
00:11:44.131    "num_base_bdevs_discovered": 3,
00:11:44.131    "num_base_bdevs_operational": 4,
00:11:44.131    "base_bdevs_list": [
00:11:44.131      {
00:11:44.131        "name": "BaseBdev1",
00:11:44.131        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:44.131        "is_configured": false,
00:11:44.131        "data_offset": 0,
00:11:44.132        "data_size": 0
00:11:44.132      },
00:11:44.132      {
00:11:44.132        "name": "BaseBdev2",
00:11:44.132        "uuid": "cd364b18-be3c-4508-8104-6f86596fd68e",
00:11:44.132        "is_configured": true,
00:11:44.132        "data_offset": 2048,
00:11:44.132        "data_size": 63488
00:11:44.132      },
00:11:44.132      {
00:11:44.132        "name": "BaseBdev3",
00:11:44.132        "uuid": "1d23088b-b64f-4990-a6dc-c9d6c1ad22ac",
00:11:44.132        "is_configured": true,
00:11:44.132        "data_offset": 2048,
00:11:44.132        "data_size": 63488
00:11:44.132      },
00:11:44.132      {
00:11:44.132        "name": "BaseBdev4",
00:11:44.132        "uuid": "af71d848-9ec3-42ad-b277-153b1e873285",
00:11:44.132        "is_configured": true,
00:11:44.132        "data_offset": 2048,
00:11:44.132        "data_size": 63488
00:11:44.132      }
00:11:44.132    ]
00:11:44.132  }'
00:11:44.132   11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:44.132   11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
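[Editor's note] At this point in the trace the test has created the remaining base devices BaseBdev3 and BaseBdev4 (32 MiB each, 512-byte blocks, hence the 65536 blocks reported above), asked for a four-member concat array named Existed_Raid with a 64 KiB strip and a superblock, and confirmed that the array stays in the "configuring" state with only 3 of 4 base bdevs discovered because BaseBdev1 does not exist yet. A minimal sketch of the same RPC sequence, issued through SPDK's scripts/rpc.py against the default RPC socket instead of the harness's rpc_cmd wrapper (an assumption made purely for illustration):

    # Create the base devices seen in the trace: 32 MiB, 512-byte blocks.
    ./scripts/rpc.py bdev_malloc_create 32 512 -b BaseBdev3
    ./scripts/rpc.py bdev_malloc_create 32 512 -b BaseBdev4
    # Ask for the array while BaseBdev1 is still missing; -z 64 sets the strip
    # size in KiB, -s requests a superblock, -r concat selects the RAID level.
    ./scripts/rpc.py bdev_raid_create -z 64 -s -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # Expect state "configuring" with num_base_bdevs_discovered = 3 of 4.
    ./scripts/rpc.py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'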
00:11:44.699   11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:11:44.699   11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.699   11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.699  [2024-12-16 11:33:10.538285] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:11:44.699   11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.699   11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:44.699   11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:44.699   11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:44.699   11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:44.699   11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:44.699   11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:44.699   11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:44.699   11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:44.699   11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:44.699   11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:44.699    11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:44.699    11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:44.699    11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.699    11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.699    11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.699   11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:44.699    "name": "Existed_Raid",
00:11:44.699    "uuid": "2a198cd1-6e5e-4e2a-961c-be2d3e9017cc",
00:11:44.699    "strip_size_kb": 64,
00:11:44.699    "state": "configuring",
00:11:44.699    "raid_level": "concat",
00:11:44.699    "superblock": true,
00:11:44.699    "num_base_bdevs": 4,
00:11:44.699    "num_base_bdevs_discovered": 2,
00:11:44.699    "num_base_bdevs_operational": 4,
00:11:44.699    "base_bdevs_list": [
00:11:44.699      {
00:11:44.699        "name": "BaseBdev1",
00:11:44.699        "uuid": "00000000-0000-0000-0000-000000000000",
00:11:44.699        "is_configured": false,
00:11:44.699        "data_offset": 0,
00:11:44.699        "data_size": 0
00:11:44.699      },
00:11:44.699      {
00:11:44.699        "name": null,
00:11:44.699        "uuid": "cd364b18-be3c-4508-8104-6f86596fd68e",
00:11:44.699        "is_configured": false,
00:11:44.699        "data_offset": 0,
00:11:44.699        "data_size": 63488
00:11:44.699      },
00:11:44.699      {
00:11:44.699        "name": "BaseBdev3",
00:11:44.699        "uuid": "1d23088b-b64f-4990-a6dc-c9d6c1ad22ac",
00:11:44.699        "is_configured": true,
00:11:44.699        "data_offset": 2048,
00:11:44.699        "data_size": 63488
00:11:44.699      },
00:11:44.699      {
00:11:44.699        "name": "BaseBdev4",
00:11:44.699        "uuid": "af71d848-9ec3-42ad-b277-153b1e873285",
00:11:44.699        "is_configured": true,
00:11:44.699        "data_offset": 2048,
00:11:44.699        "data_size": 63488
00:11:44.699      }
00:11:44.699    ]
00:11:44.699  }'
00:11:44.699   11:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:44.699   11:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.959    11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:44.959    11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.959    11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.959    11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:11:44.959    11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
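[Editor's note] The two calls traced above removed BaseBdev2 from the still-configuring array and then confirmed with a jq index lookup that slot 1 of base_bdevs_list now reports is_configured false (the slot itself remains, with its name cleared to null). A sketch of the same pair, again via scripts/rpc.py for illustration only:

    # Drop BaseBdev2 from the array; the slot stays reserved in base_bdevs_list.
    ./scripts/rpc.py bdev_raid_remove_base_bdev BaseBdev2
    # Slot index 1 corresponds to BaseBdev2 in this four-slot layout.
    ./scripts/rpc.py bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[1].is_configured'   # expect: false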
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:45.219  [2024-12-16 11:33:11.060458] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:45.219  BaseBdev1
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:45.219  [
00:11:45.219  {
00:11:45.219  "name": "BaseBdev1",
00:11:45.219  "aliases": [
00:11:45.219  "53415298-2dfd-44fd-ab7e-b80bd5c46956"
00:11:45.219  ],
00:11:45.219  "product_name": "Malloc disk",
00:11:45.219  "block_size": 512,
00:11:45.219  "num_blocks": 65536,
00:11:45.219  "uuid": "53415298-2dfd-44fd-ab7e-b80bd5c46956",
00:11:45.219  "assigned_rate_limits": {
00:11:45.219  "rw_ios_per_sec": 0,
00:11:45.219  "rw_mbytes_per_sec": 0,
00:11:45.219  "r_mbytes_per_sec": 0,
00:11:45.219  "w_mbytes_per_sec": 0
00:11:45.219  },
00:11:45.219  "claimed": true,
00:11:45.219  "claim_type": "exclusive_write",
00:11:45.219  "zoned": false,
00:11:45.219  "supported_io_types": {
00:11:45.219  "read": true,
00:11:45.219  "write": true,
00:11:45.219  "unmap": true,
00:11:45.219  "flush": true,
00:11:45.219  "reset": true,
00:11:45.219  "nvme_admin": false,
00:11:45.219  "nvme_io": false,
00:11:45.219  "nvme_io_md": false,
00:11:45.219  "write_zeroes": true,
00:11:45.219  "zcopy": true,
00:11:45.219  "get_zone_info": false,
00:11:45.219  "zone_management": false,
00:11:45.219  "zone_append": false,
00:11:45.219  "compare": false,
00:11:45.219  "compare_and_write": false,
00:11:45.219  "abort": true,
00:11:45.219  "seek_hole": false,
00:11:45.219  "seek_data": false,
00:11:45.219  "copy": true,
00:11:45.219  "nvme_iov_md": false
00:11:45.219  },
00:11:45.219  "memory_domains": [
00:11:45.219  {
00:11:45.219  "dma_device_id": "system",
00:11:45.219  "dma_device_type": 1
00:11:45.219  },
00:11:45.219  {
00:11:45.219  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:45.219  "dma_device_type": 2
00:11:45.219  }
00:11:45.219  ],
00:11:45.219  "driver_specific": {}
00:11:45.219  }
00:11:45.219  ]
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:45.219    11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:45.219    11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:45.219    11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.219    11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:45.219    11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:45.219    "name": "Existed_Raid",
00:11:45.219    "uuid": "2a198cd1-6e5e-4e2a-961c-be2d3e9017cc",
00:11:45.219    "strip_size_kb": 64,
00:11:45.219    "state": "configuring",
00:11:45.219    "raid_level": "concat",
00:11:45.219    "superblock": true,
00:11:45.219    "num_base_bdevs": 4,
00:11:45.219    "num_base_bdevs_discovered": 3,
00:11:45.219    "num_base_bdevs_operational": 4,
00:11:45.219    "base_bdevs_list": [
00:11:45.219      {
00:11:45.219        "name": "BaseBdev1",
00:11:45.219        "uuid": "53415298-2dfd-44fd-ab7e-b80bd5c46956",
00:11:45.219        "is_configured": true,
00:11:45.219        "data_offset": 2048,
00:11:45.219        "data_size": 63488
00:11:45.219      },
00:11:45.219      {
00:11:45.219        "name": null,
00:11:45.219        "uuid": "cd364b18-be3c-4508-8104-6f86596fd68e",
00:11:45.219        "is_configured": false,
00:11:45.219        "data_offset": 0,
00:11:45.219        "data_size": 63488
00:11:45.219      },
00:11:45.219      {
00:11:45.219        "name": "BaseBdev3",
00:11:45.219        "uuid": "1d23088b-b64f-4990-a6dc-c9d6c1ad22ac",
00:11:45.219        "is_configured": true,
00:11:45.219        "data_offset": 2048,
00:11:45.219        "data_size": 63488
00:11:45.219      },
00:11:45.219      {
00:11:45.219        "name": "BaseBdev4",
00:11:45.219        "uuid": "af71d848-9ec3-42ad-b277-153b1e873285",
00:11:45.219        "is_configured": true,
00:11:45.219        "data_offset": 2048,
00:11:45.219        "data_size": 63488
00:11:45.219      }
00:11:45.219    ]
00:11:45.219  }'
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:45.219   11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:45.788    11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:45.788    11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:11:45.788    11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.788    11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:45.788    11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.788   11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:11:45.788   11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:11:45.788   11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.788   11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:45.788  [2024-12-16 11:33:11.623578] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:11:45.788   11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.788   11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:45.788   11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:45.788   11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:45.788   11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:45.788   11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:45.788   11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:45.788   11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:45.788   11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:45.788   11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:45.788   11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:45.788    11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:45.788    11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:45.788    11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.788    11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:45.788    11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.788   11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:45.788    "name": "Existed_Raid",
00:11:45.788    "uuid": "2a198cd1-6e5e-4e2a-961c-be2d3e9017cc",
00:11:45.788    "strip_size_kb": 64,
00:11:45.788    "state": "configuring",
00:11:45.788    "raid_level": "concat",
00:11:45.788    "superblock": true,
00:11:45.788    "num_base_bdevs": 4,
00:11:45.788    "num_base_bdevs_discovered": 2,
00:11:45.788    "num_base_bdevs_operational": 4,
00:11:45.788    "base_bdevs_list": [
00:11:45.788      {
00:11:45.788        "name": "BaseBdev1",
00:11:45.788        "uuid": "53415298-2dfd-44fd-ab7e-b80bd5c46956",
00:11:45.788        "is_configured": true,
00:11:45.788        "data_offset": 2048,
00:11:45.788        "data_size": 63488
00:11:45.788      },
00:11:45.788      {
00:11:45.788        "name": null,
00:11:45.788        "uuid": "cd364b18-be3c-4508-8104-6f86596fd68e",
00:11:45.788        "is_configured": false,
00:11:45.788        "data_offset": 0,
00:11:45.788        "data_size": 63488
00:11:45.788      },
00:11:45.788      {
00:11:45.788        "name": null,
00:11:45.788        "uuid": "1d23088b-b64f-4990-a6dc-c9d6c1ad22ac",
00:11:45.788        "is_configured": false,
00:11:45.788        "data_offset": 0,
00:11:45.788        "data_size": 63488
00:11:45.788      },
00:11:45.788      {
00:11:45.788        "name": "BaseBdev4",
00:11:45.788        "uuid": "af71d848-9ec3-42ad-b277-153b1e873285",
00:11:45.788        "is_configured": true,
00:11:45.788        "data_offset": 2048,
00:11:45.788        "data_size": 63488
00:11:45.788      }
00:11:45.788    ]
00:11:45.788  }'
00:11:45.788   11:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:45.789   11:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:46.048    11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:46.048    11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:11:46.048    11:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.048    11:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:46.308    11:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:46.308   11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:11:46.308   11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:11:46.308   11:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.308   11:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:46.308  [2024-12-16 11:33:12.138737] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:46.308   11:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:46.308   11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:46.308   11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:46.308   11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:46.308   11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:46.308   11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:46.308   11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:46.308   11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:46.308   11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:46.308   11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:46.308   11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:46.308    11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:46.308    11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:46.308    11:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.308    11:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:46.308    11:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:46.308   11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:46.308    "name": "Existed_Raid",
00:11:46.308    "uuid": "2a198cd1-6e5e-4e2a-961c-be2d3e9017cc",
00:11:46.308    "strip_size_kb": 64,
00:11:46.308    "state": "configuring",
00:11:46.308    "raid_level": "concat",
00:11:46.308    "superblock": true,
00:11:46.308    "num_base_bdevs": 4,
00:11:46.308    "num_base_bdevs_discovered": 3,
00:11:46.308    "num_base_bdevs_operational": 4,
00:11:46.308    "base_bdevs_list": [
00:11:46.308      {
00:11:46.308        "name": "BaseBdev1",
00:11:46.308        "uuid": "53415298-2dfd-44fd-ab7e-b80bd5c46956",
00:11:46.308        "is_configured": true,
00:11:46.308        "data_offset": 2048,
00:11:46.308        "data_size": 63488
00:11:46.308      },
00:11:46.308      {
00:11:46.308        "name": null,
00:11:46.308        "uuid": "cd364b18-be3c-4508-8104-6f86596fd68e",
00:11:46.308        "is_configured": false,
00:11:46.308        "data_offset": 0,
00:11:46.308        "data_size": 63488
00:11:46.308      },
00:11:46.308      {
00:11:46.308        "name": "BaseBdev3",
00:11:46.308        "uuid": "1d23088b-b64f-4990-a6dc-c9d6c1ad22ac",
00:11:46.308        "is_configured": true,
00:11:46.308        "data_offset": 2048,
00:11:46.308        "data_size": 63488
00:11:46.308      },
00:11:46.308      {
00:11:46.308        "name": "BaseBdev4",
00:11:46.308        "uuid": "af71d848-9ec3-42ad-b277-153b1e873285",
00:11:46.308        "is_configured": true,
00:11:46.308        "data_offset": 2048,
00:11:46.308        "data_size": 63488
00:11:46.308      }
00:11:46.308    ]
00:11:46.308  }'
00:11:46.308   11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:46.308   11:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:46.568    11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:46.568    11:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.568    11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:11:46.568    11:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:46.568    11:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:46.568   11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
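[Editor's note] The converse operation, bdev_raid_add_base_bdev, hands an existing bdev back to a named array: the trace above shows BaseBdev3 being claimed again and slot 2 flipping back to is_configured true while the array remains in "configuring". A sketch of the pair, with the same caveat that scripts/rpc.py stands in for the harness wrapper:

    # Re-attach an existing bdev to the array by name.
    ./scripts/rpc.py bdev_raid_add_base_bdev Existed_Raid BaseBdev3
    ./scripts/rpc.py bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'   # expect: true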
00:11:46.568   11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:11:46.827   11:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.827   11:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:46.827  [2024-12-16 11:33:12.637921] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:11:46.827   11:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:46.827   11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:46.827   11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:46.827   11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:46.827   11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:46.827   11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:46.827   11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:46.827   11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:46.827   11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:46.827   11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:46.827   11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:46.827    11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:46.827    11:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.827    11:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:46.827    11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:46.828    11:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:46.828   11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:46.828    "name": "Existed_Raid",
00:11:46.828    "uuid": "2a198cd1-6e5e-4e2a-961c-be2d3e9017cc",
00:11:46.828    "strip_size_kb": 64,
00:11:46.828    "state": "configuring",
00:11:46.828    "raid_level": "concat",
00:11:46.828    "superblock": true,
00:11:46.828    "num_base_bdevs": 4,
00:11:46.828    "num_base_bdevs_discovered": 2,
00:11:46.828    "num_base_bdevs_operational": 4,
00:11:46.828    "base_bdevs_list": [
00:11:46.828      {
00:11:46.828        "name": null,
00:11:46.828        "uuid": "53415298-2dfd-44fd-ab7e-b80bd5c46956",
00:11:46.828        "is_configured": false,
00:11:46.828        "data_offset": 0,
00:11:46.828        "data_size": 63488
00:11:46.828      },
00:11:46.828      {
00:11:46.828        "name": null,
00:11:46.828        "uuid": "cd364b18-be3c-4508-8104-6f86596fd68e",
00:11:46.828        "is_configured": false,
00:11:46.828        "data_offset": 0,
00:11:46.828        "data_size": 63488
00:11:46.828      },
00:11:46.828      {
00:11:46.828        "name": "BaseBdev3",
00:11:46.828        "uuid": "1d23088b-b64f-4990-a6dc-c9d6c1ad22ac",
00:11:46.828        "is_configured": true,
00:11:46.828        "data_offset": 2048,
00:11:46.828        "data_size": 63488
00:11:46.828      },
00:11:46.828      {
00:11:46.828        "name": "BaseBdev4",
00:11:46.828        "uuid": "af71d848-9ec3-42ad-b277-153b1e873285",
00:11:46.828        "is_configured": true,
00:11:46.828        "data_offset": 2048,
00:11:46.828        "data_size": 63488
00:11:46.828      }
00:11:46.828    ]
00:11:46.828  }'
00:11:46.828   11:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:46.828   11:33:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:47.093    11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:47.093    11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:11:47.093    11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:47.093    11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:47.093    11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:47.093   11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:11:47.093   11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:11:47.093   11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:47.093   11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:47.093  [2024-12-16 11:33:13.119992] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:47.093   11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:47.093   11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:47.093   11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:47.093   11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:47.093   11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:47.093   11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:47.093   11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:47.093   11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:47.093   11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:47.093   11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:47.093   11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:47.093    11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:47.093    11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:47.093    11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:47.093    11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:47.093    11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:47.362   11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:47.362    "name": "Existed_Raid",
00:11:47.362    "uuid": "2a198cd1-6e5e-4e2a-961c-be2d3e9017cc",
00:11:47.362    "strip_size_kb": 64,
00:11:47.362    "state": "configuring",
00:11:47.362    "raid_level": "concat",
00:11:47.362    "superblock": true,
00:11:47.362    "num_base_bdevs": 4,
00:11:47.362    "num_base_bdevs_discovered": 3,
00:11:47.362    "num_base_bdevs_operational": 4,
00:11:47.362    "base_bdevs_list": [
00:11:47.362      {
00:11:47.362        "name": null,
00:11:47.362        "uuid": "53415298-2dfd-44fd-ab7e-b80bd5c46956",
00:11:47.362        "is_configured": false,
00:11:47.362        "data_offset": 0,
00:11:47.362        "data_size": 63488
00:11:47.362      },
00:11:47.362      {
00:11:47.362        "name": "BaseBdev2",
00:11:47.362        "uuid": "cd364b18-be3c-4508-8104-6f86596fd68e",
00:11:47.362        "is_configured": true,
00:11:47.362        "data_offset": 2048,
00:11:47.362        "data_size": 63488
00:11:47.362      },
00:11:47.362      {
00:11:47.362        "name": "BaseBdev3",
00:11:47.362        "uuid": "1d23088b-b64f-4990-a6dc-c9d6c1ad22ac",
00:11:47.362        "is_configured": true,
00:11:47.362        "data_offset": 2048,
00:11:47.362        "data_size": 63488
00:11:47.362      },
00:11:47.362      {
00:11:47.362        "name": "BaseBdev4",
00:11:47.362        "uuid": "af71d848-9ec3-42ad-b277-153b1e873285",
00:11:47.362        "is_configured": true,
00:11:47.362        "data_offset": 2048,
00:11:47.362        "data_size": 63488
00:11:47.362      }
00:11:47.362    ]
00:11:47.362  }'
00:11:47.362   11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:47.362   11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:47.621    11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:47.622    11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:47.622    11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:47.622    11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:11:47.622    11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:47.622   11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:11:47.622    11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:47.622    11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:47.622    11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:11:47.622    11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:47.622    11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:47.622   11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 53415298-2dfd-44fd-ab7e-b80bd5c46956
00:11:47.622   11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:47.622   11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:47.622  [2024-12-16 11:33:13.666226] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:11:47.622  [2024-12-16 11:33:13.666499] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:11:47.622  NewBaseBdev
00:11:47.622  [2024-12-16 11:33:13.666580] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:11:47.622  [2024-12-16 11:33:13.666872] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:11:47.622  [2024-12-16 11:33:13.667002] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:11:47.622  [2024-12-16 11:33:13.667016] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00
00:11:47.622  [2024-12-16 11:33:13.667121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:47.622   11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:47.622   11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:11:47.622   11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev
00:11:47.622   11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:47.622   11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:11:47.622   11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:47.622   11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:47.622   11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:47.622   11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:47.622   11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:47.622   11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:47.622   11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:11:47.622   11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:47.622   11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:47.881  [
00:11:47.881  {
00:11:47.881  "name": "NewBaseBdev",
00:11:47.881  "aliases": [
00:11:47.881  "53415298-2dfd-44fd-ab7e-b80bd5c46956"
00:11:47.881  ],
00:11:47.881  "product_name": "Malloc disk",
00:11:47.881  "block_size": 512,
00:11:47.881  "num_blocks": 65536,
00:11:47.881  "uuid": "53415298-2dfd-44fd-ab7e-b80bd5c46956",
00:11:47.881  "assigned_rate_limits": {
00:11:47.881  "rw_ios_per_sec": 0,
00:11:47.881  "rw_mbytes_per_sec": 0,
00:11:47.881  "r_mbytes_per_sec": 0,
00:11:47.881  "w_mbytes_per_sec": 0
00:11:47.881  },
00:11:47.881  "claimed": true,
00:11:47.881  "claim_type": "exclusive_write",
00:11:47.881  "zoned": false,
00:11:47.881  "supported_io_types": {
00:11:47.881  "read": true,
00:11:47.881  "write": true,
00:11:47.881  "unmap": true,
00:11:47.881  "flush": true,
00:11:47.881  "reset": true,
00:11:47.881  "nvme_admin": false,
00:11:47.881  "nvme_io": false,
00:11:47.881  "nvme_io_md": false,
00:11:47.881  "write_zeroes": true,
00:11:47.881  "zcopy": true,
00:11:47.881  "get_zone_info": false,
00:11:47.881  "zone_management": false,
00:11:47.881  "zone_append": false,
00:11:47.881  "compare": false,
00:11:47.881  "compare_and_write": false,
00:11:47.881  "abort": true,
00:11:47.881  "seek_hole": false,
00:11:47.881  "seek_data": false,
00:11:47.881  "copy": true,
00:11:47.881  "nvme_iov_md": false
00:11:47.881  },
00:11:47.881  "memory_domains": [
00:11:47.881  {
00:11:47.881  "dma_device_id": "system",
00:11:47.881  "dma_device_type": 1
00:11:47.881  },
00:11:47.881  {
00:11:47.881  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:47.881  "dma_device_type": 2
00:11:47.881  }
00:11:47.881  ],
00:11:47.881  "driver_specific": {}
00:11:47.881  }
00:11:47.881  ]
00:11:47.881   11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:47.881   11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:11:47.881   11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4
00:11:47.881   11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:47.881   11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:47.881   11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:47.881   11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:47.881   11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:47.881   11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:47.881   11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:47.881   11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:47.881   11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:47.881    11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:47.881    11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:47.882    11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:47.882    11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:47.882    11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:47.882   11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:47.882    "name": "Existed_Raid",
00:11:47.882    "uuid": "2a198cd1-6e5e-4e2a-961c-be2d3e9017cc",
00:11:47.882    "strip_size_kb": 64,
00:11:47.882    "state": "online",
00:11:47.882    "raid_level": "concat",
00:11:47.882    "superblock": true,
00:11:47.882    "num_base_bdevs": 4,
00:11:47.882    "num_base_bdevs_discovered": 4,
00:11:47.882    "num_base_bdevs_operational": 4,
00:11:47.882    "base_bdevs_list": [
00:11:47.882      {
00:11:47.882        "name": "NewBaseBdev",
00:11:47.882        "uuid": "53415298-2dfd-44fd-ab7e-b80bd5c46956",
00:11:47.882        "is_configured": true,
00:11:47.882        "data_offset": 2048,
00:11:47.882        "data_size": 63488
00:11:47.882      },
00:11:47.882      {
00:11:47.882        "name": "BaseBdev2",
00:11:47.882        "uuid": "cd364b18-be3c-4508-8104-6f86596fd68e",
00:11:47.882        "is_configured": true,
00:11:47.882        "data_offset": 2048,
00:11:47.882        "data_size": 63488
00:11:47.882      },
00:11:47.882      {
00:11:47.882        "name": "BaseBdev3",
00:11:47.882        "uuid": "1d23088b-b64f-4990-a6dc-c9d6c1ad22ac",
00:11:47.882        "is_configured": true,
00:11:47.882        "data_offset": 2048,
00:11:47.882        "data_size": 63488
00:11:47.882      },
00:11:47.882      {
00:11:47.882        "name": "BaseBdev4",
00:11:47.882        "uuid": "af71d848-9ec3-42ad-b277-153b1e873285",
00:11:47.882        "is_configured": true,
00:11:47.882        "data_offset": 2048,
00:11:47.882        "data_size": 63488
00:11:47.882      }
00:11:47.882    ]
00:11:47.882  }'
00:11:47.882   11:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:47.882   11:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
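[Editor's note] The block above shows the assembly completing: the test read the UUID the array still records for the empty slot 0 (53415298-2dfd-44fd-ab7e-b80bd5c46956), re-created a malloc bdev under that UUID as NewBaseBdev, and the raid module claimed it automatically, so all four slots became configured and Existed_Raid moved from "configuring" to "online" with no further RPC. A sketch of that final step via scripts/rpc.py; the trailing .state filter is an illustrative addition, not part of the test itself:

    # Recreate the missing member under the UUID the array expects for slot 0
    # so it can be claimed and the array can finish assembling.
    ./scripts/rpc.py bdev_malloc_create 32 512 -b NewBaseBdev -u 53415298-2dfd-44fd-ab7e-b80bd5c46956
    ./scripts/rpc.py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # expect: online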
00:11:48.141   11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:11:48.141   11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:11:48.141   11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:48.141   11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:48.141   11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:11:48.141   11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:48.141    11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:11:48.141    11:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:48.141    11:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:48.141    11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:48.141  [2024-12-16 11:33:14.141867] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:48.141    11:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:48.141   11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:48.141    "name": "Existed_Raid",
00:11:48.141    "aliases": [
00:11:48.141      "2a198cd1-6e5e-4e2a-961c-be2d3e9017cc"
00:11:48.141    ],
00:11:48.141    "product_name": "Raid Volume",
00:11:48.141    "block_size": 512,
00:11:48.141    "num_blocks": 253952,
00:11:48.141    "uuid": "2a198cd1-6e5e-4e2a-961c-be2d3e9017cc",
00:11:48.141    "assigned_rate_limits": {
00:11:48.141      "rw_ios_per_sec": 0,
00:11:48.141      "rw_mbytes_per_sec": 0,
00:11:48.141      "r_mbytes_per_sec": 0,
00:11:48.141      "w_mbytes_per_sec": 0
00:11:48.141    },
00:11:48.141    "claimed": false,
00:11:48.141    "zoned": false,
00:11:48.141    "supported_io_types": {
00:11:48.141      "read": true,
00:11:48.141      "write": true,
00:11:48.141      "unmap": true,
00:11:48.141      "flush": true,
00:11:48.141      "reset": true,
00:11:48.141      "nvme_admin": false,
00:11:48.141      "nvme_io": false,
00:11:48.141      "nvme_io_md": false,
00:11:48.141      "write_zeroes": true,
00:11:48.141      "zcopy": false,
00:11:48.141      "get_zone_info": false,
00:11:48.141      "zone_management": false,
00:11:48.141      "zone_append": false,
00:11:48.141      "compare": false,
00:11:48.141      "compare_and_write": false,
00:11:48.141      "abort": false,
00:11:48.141      "seek_hole": false,
00:11:48.141      "seek_data": false,
00:11:48.141      "copy": false,
00:11:48.141      "nvme_iov_md": false
00:11:48.141    },
00:11:48.141    "memory_domains": [
00:11:48.141      {
00:11:48.141        "dma_device_id": "system",
00:11:48.141        "dma_device_type": 1
00:11:48.141      },
00:11:48.141      {
00:11:48.141        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:48.141        "dma_device_type": 2
00:11:48.141      },
00:11:48.141      {
00:11:48.141        "dma_device_id": "system",
00:11:48.141        "dma_device_type": 1
00:11:48.141      },
00:11:48.141      {
00:11:48.141        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:48.141        "dma_device_type": 2
00:11:48.141      },
00:11:48.141      {
00:11:48.141        "dma_device_id": "system",
00:11:48.141        "dma_device_type": 1
00:11:48.141      },
00:11:48.141      {
00:11:48.141        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:48.141        "dma_device_type": 2
00:11:48.141      },
00:11:48.141      {
00:11:48.141        "dma_device_id": "system",
00:11:48.141        "dma_device_type": 1
00:11:48.141      },
00:11:48.141      {
00:11:48.141        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:48.141        "dma_device_type": 2
00:11:48.141      }
00:11:48.141    ],
00:11:48.141    "driver_specific": {
00:11:48.141      "raid": {
00:11:48.141        "uuid": "2a198cd1-6e5e-4e2a-961c-be2d3e9017cc",
00:11:48.141        "strip_size_kb": 64,
00:11:48.141        "state": "online",
00:11:48.141        "raid_level": "concat",
00:11:48.141        "superblock": true,
00:11:48.141        "num_base_bdevs": 4,
00:11:48.141        "num_base_bdevs_discovered": 4,
00:11:48.141        "num_base_bdevs_operational": 4,
00:11:48.141        "base_bdevs_list": [
00:11:48.141          {
00:11:48.141            "name": "NewBaseBdev",
00:11:48.141            "uuid": "53415298-2dfd-44fd-ab7e-b80bd5c46956",
00:11:48.141            "is_configured": true,
00:11:48.141            "data_offset": 2048,
00:11:48.141            "data_size": 63488
00:11:48.141          },
00:11:48.141          {
00:11:48.141            "name": "BaseBdev2",
00:11:48.141            "uuid": "cd364b18-be3c-4508-8104-6f86596fd68e",
00:11:48.141            "is_configured": true,
00:11:48.141            "data_offset": 2048,
00:11:48.141            "data_size": 63488
00:11:48.141          },
00:11:48.141          {
00:11:48.141            "name": "BaseBdev3",
00:11:48.141            "uuid": "1d23088b-b64f-4990-a6dc-c9d6c1ad22ac",
00:11:48.141            "is_configured": true,
00:11:48.141            "data_offset": 2048,
00:11:48.141            "data_size": 63488
00:11:48.141          },
00:11:48.141          {
00:11:48.141            "name": "BaseBdev4",
00:11:48.141            "uuid": "af71d848-9ec3-42ad-b277-153b1e873285",
00:11:48.141            "is_configured": true,
00:11:48.141            "data_offset": 2048,
00:11:48.141            "data_size": 63488
00:11:48.141          }
00:11:48.141        ]
00:11:48.141      }
00:11:48.141    }
00:11:48.141  }'
00:11:48.141    11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:48.401   11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:11:48.401  BaseBdev2
00:11:48.401  BaseBdev3
00:11:48.401  BaseBdev4'
00:11:48.401    11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:48.401   11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:11:48.401   11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:48.401    11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:11:48.401    11:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:48.401    11:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:48.401    11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:48.401    11:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:48.401   11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:48.401   11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:48.401   11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:48.401    11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:11:48.401    11:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:48.401    11:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:48.401    11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:48.401    11:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:48.401   11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:48.401   11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:48.401   11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:48.401    11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:48.401    11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:11:48.401    11:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:48.401    11:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:48.401    11:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:48.401   11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:48.401   11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:48.401   11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:48.401    11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:11:48.401    11:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:48.401    11:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:48.401    11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:48.401    11:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:48.401   11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:48.401   11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:48.401   11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:11:48.401   11:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:48.401   11:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:48.401  [2024-12-16 11:33:14.424948] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:48.401  [2024-12-16 11:33:14.425028] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:48.401  [2024-12-16 11:33:14.425138] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:48.401  [2024-12-16 11:33:14.425224] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:48.401  [2024-12-16 11:33:14.425251] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline
00:11:48.401   11:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:48.401   11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83150
00:11:48.401   11:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 83150 ']'
00:11:48.401   11:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 83150
00:11:48.401    11:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname
00:11:48.401   11:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:48.401    11:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83150
00:11:48.661   11:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:11:48.661   11:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:11:48.661   11:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83150'
00:11:48.661  killing process with pid 83150
00:11:48.661   11:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 83150
00:11:48.661  [2024-12-16 11:33:14.472473] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:11:48.661   11:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 83150
00:11:48.661  [2024-12-16 11:33:14.514183] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:11:48.921   11:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:11:48.921  
00:11:48.921  real	0m9.757s
00:11:48.921  user	0m16.663s
00:11:48.921  sys	0m2.055s
00:11:48.921   11:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:48.921  ************************************
00:11:48.921  END TEST raid_state_function_test_sb
00:11:48.921  ************************************
00:11:48.921   11:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:48.921   11:33:14 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4
00:11:48.921   11:33:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:11:48.921   11:33:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:48.921   11:33:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:11:48.921  ************************************
00:11:48.921  START TEST raid_superblock_test
00:11:48.921  ************************************
00:11:48.921   11:33:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4
00:11:48.921   11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat
00:11:48.921   11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4
00:11:48.921   11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:11:48.921   11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:11:48.921   11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:11:48.921   11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:11:48.921   11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:11:48.921   11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:11:48.921   11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:11:48.921   11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:11:48.921   11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:11:48.922   11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:11:48.922   11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:11:48.922   11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']'
00:11:48.922   11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:11:48.922   11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:11:48.922   11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83798
00:11:48.922   11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:11:48.922   11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83798
00:11:48.922   11:33:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 83798 ']'
00:11:48.922   11:33:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:48.922   11:33:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:11:48.922   11:33:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:48.922  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:48.922   11:33:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:11:48.922   11:33:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:48.922  [2024-12-16 11:33:14.912451] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:11:48.922  [2024-12-16 11:33:14.912699] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83798 ]
00:11:49.199  [2024-12-16 11:33:15.071863] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:49.199  [2024-12-16 11:33:15.121263] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:11:49.199  [2024-12-16 11:33:15.164638] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:49.199  [2024-12-16 11:33:15.164681] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:49.773  malloc1
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:49.773  [2024-12-16 11:33:15.799682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:11:49.773  [2024-12-16 11:33:15.799812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:49.773  [2024-12-16 11:33:15.799854] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:11:49.773  [2024-12-16 11:33:15.799902] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:49.773  [2024-12-16 11:33:15.802073] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:49.773  [2024-12-16 11:33:15.802151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:11:49.773  pt1
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:49.773  malloc2
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:49.773   11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.034  [2024-12-16 11:33:15.841698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:50.034  [2024-12-16 11:33:15.841810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:50.034  [2024-12-16 11:33:15.841834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:11:50.034  [2024-12-16 11:33:15.841846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:50.034  [2024-12-16 11:33:15.844347] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:50.034  [2024-12-16 11:33:15.844393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:50.034  pt2
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.034  malloc3
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.034  [2024-12-16 11:33:15.870435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:11:50.034  [2024-12-16 11:33:15.870545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:50.034  [2024-12-16 11:33:15.870591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:11:50.034  [2024-12-16 11:33:15.870622] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:50.034  [2024-12-16 11:33:15.872806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:50.034  [2024-12-16 11:33:15.872876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:11:50.034  pt3
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.034  malloc4
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.034  [2024-12-16 11:33:15.903231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:11:50.034  [2024-12-16 11:33:15.903346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:50.034  [2024-12-16 11:33:15.903384] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:11:50.034  [2024-12-16 11:33:15.903420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:50.034  [2024-12-16 11:33:15.905608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:50.034  [2024-12-16 11:33:15.905677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:11:50.034  pt4
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.034  [2024-12-16 11:33:15.915292] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:11:50.034  [2024-12-16 11:33:15.917230] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:50.034  [2024-12-16 11:33:15.917327] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:11:50.034  [2024-12-16 11:33:15.917412] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:11:50.034  [2024-12-16 11:33:15.917659] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:11:50.034  [2024-12-16 11:33:15.917713] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:11:50.034  [2024-12-16 11:33:15.918035] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:11:50.034  [2024-12-16 11:33:15.918230] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:11:50.034  [2024-12-16 11:33:15.918277] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:11:50.034  [2024-12-16 11:33:15.918455] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:50.034    11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:50.034    11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:50.034    11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.034    11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:50.034    11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:50.034    "name": "raid_bdev1",
00:11:50.034    "uuid": "b9dcdb7d-5481-4380-acff-f818fa630e1b",
00:11:50.034    "strip_size_kb": 64,
00:11:50.034    "state": "online",
00:11:50.034    "raid_level": "concat",
00:11:50.034    "superblock": true,
00:11:50.034    "num_base_bdevs": 4,
00:11:50.034    "num_base_bdevs_discovered": 4,
00:11:50.034    "num_base_bdevs_operational": 4,
00:11:50.034    "base_bdevs_list": [
00:11:50.034      {
00:11:50.034        "name": "pt1",
00:11:50.034        "uuid": "00000000-0000-0000-0000-000000000001",
00:11:50.034        "is_configured": true,
00:11:50.034        "data_offset": 2048,
00:11:50.034        "data_size": 63488
00:11:50.034      },
00:11:50.034      {
00:11:50.034        "name": "pt2",
00:11:50.034        "uuid": "00000000-0000-0000-0000-000000000002",
00:11:50.034        "is_configured": true,
00:11:50.034        "data_offset": 2048,
00:11:50.034        "data_size": 63488
00:11:50.034      },
00:11:50.034      {
00:11:50.034        "name": "pt3",
00:11:50.034        "uuid": "00000000-0000-0000-0000-000000000003",
00:11:50.034        "is_configured": true,
00:11:50.034        "data_offset": 2048,
00:11:50.034        "data_size": 63488
00:11:50.034      },
00:11:50.034      {
00:11:50.034        "name": "pt4",
00:11:50.034        "uuid": "00000000-0000-0000-0000-000000000004",
00:11:50.034        "is_configured": true,
00:11:50.034        "data_offset": 2048,
00:11:50.034        "data_size": 63488
00:11:50.034      }
00:11:50.034    ]
00:11:50.034  }'
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:50.034   11:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.294   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:11:50.294   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:11:50.294   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:50.294   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:50.294   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:11:50.294   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:50.294    11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:50.294    11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:50.294    11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.294    11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:50.294  [2024-12-16 11:33:16.350886] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:50.553    11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:50.553   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:50.553    "name": "raid_bdev1",
00:11:50.553    "aliases": [
00:11:50.553      "b9dcdb7d-5481-4380-acff-f818fa630e1b"
00:11:50.553    ],
00:11:50.553    "product_name": "Raid Volume",
00:11:50.553    "block_size": 512,
00:11:50.553    "num_blocks": 253952,
00:11:50.553    "uuid": "b9dcdb7d-5481-4380-acff-f818fa630e1b",
00:11:50.553    "assigned_rate_limits": {
00:11:50.553      "rw_ios_per_sec": 0,
00:11:50.553      "rw_mbytes_per_sec": 0,
00:11:50.553      "r_mbytes_per_sec": 0,
00:11:50.553      "w_mbytes_per_sec": 0
00:11:50.553    },
00:11:50.553    "claimed": false,
00:11:50.553    "zoned": false,
00:11:50.553    "supported_io_types": {
00:11:50.553      "read": true,
00:11:50.553      "write": true,
00:11:50.553      "unmap": true,
00:11:50.553      "flush": true,
00:11:50.553      "reset": true,
00:11:50.553      "nvme_admin": false,
00:11:50.553      "nvme_io": false,
00:11:50.553      "nvme_io_md": false,
00:11:50.553      "write_zeroes": true,
00:11:50.553      "zcopy": false,
00:11:50.553      "get_zone_info": false,
00:11:50.553      "zone_management": false,
00:11:50.553      "zone_append": false,
00:11:50.553      "compare": false,
00:11:50.553      "compare_and_write": false,
00:11:50.553      "abort": false,
00:11:50.553      "seek_hole": false,
00:11:50.553      "seek_data": false,
00:11:50.553      "copy": false,
00:11:50.553      "nvme_iov_md": false
00:11:50.553    },
00:11:50.553    "memory_domains": [
00:11:50.553      {
00:11:50.553        "dma_device_id": "system",
00:11:50.553        "dma_device_type": 1
00:11:50.553      },
00:11:50.553      {
00:11:50.553        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:50.553        "dma_device_type": 2
00:11:50.553      },
00:11:50.553      {
00:11:50.553        "dma_device_id": "system",
00:11:50.553        "dma_device_type": 1
00:11:50.553      },
00:11:50.553      {
00:11:50.553        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:50.553        "dma_device_type": 2
00:11:50.553      },
00:11:50.553      {
00:11:50.553        "dma_device_id": "system",
00:11:50.553        "dma_device_type": 1
00:11:50.553      },
00:11:50.553      {
00:11:50.553        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:50.553        "dma_device_type": 2
00:11:50.553      },
00:11:50.553      {
00:11:50.553        "dma_device_id": "system",
00:11:50.553        "dma_device_type": 1
00:11:50.553      },
00:11:50.553      {
00:11:50.553        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:50.553        "dma_device_type": 2
00:11:50.553      }
00:11:50.553    ],
00:11:50.553    "driver_specific": {
00:11:50.553      "raid": {
00:11:50.553        "uuid": "b9dcdb7d-5481-4380-acff-f818fa630e1b",
00:11:50.553        "strip_size_kb": 64,
00:11:50.553        "state": "online",
00:11:50.553        "raid_level": "concat",
00:11:50.553        "superblock": true,
00:11:50.553        "num_base_bdevs": 4,
00:11:50.553        "num_base_bdevs_discovered": 4,
00:11:50.553        "num_base_bdevs_operational": 4,
00:11:50.553        "base_bdevs_list": [
00:11:50.553          {
00:11:50.553            "name": "pt1",
00:11:50.553            "uuid": "00000000-0000-0000-0000-000000000001",
00:11:50.553            "is_configured": true,
00:11:50.553            "data_offset": 2048,
00:11:50.553            "data_size": 63488
00:11:50.553          },
00:11:50.553          {
00:11:50.553            "name": "pt2",
00:11:50.553            "uuid": "00000000-0000-0000-0000-000000000002",
00:11:50.553            "is_configured": true,
00:11:50.553            "data_offset": 2048,
00:11:50.553            "data_size": 63488
00:11:50.553          },
00:11:50.553          {
00:11:50.553            "name": "pt3",
00:11:50.553            "uuid": "00000000-0000-0000-0000-000000000003",
00:11:50.554            "is_configured": true,
00:11:50.554            "data_offset": 2048,
00:11:50.554            "data_size": 63488
00:11:50.554          },
00:11:50.554          {
00:11:50.554            "name": "pt4",
00:11:50.554            "uuid": "00000000-0000-0000-0000-000000000004",
00:11:50.554            "is_configured": true,
00:11:50.554            "data_offset": 2048,
00:11:50.554            "data_size": 63488
00:11:50.554          }
00:11:50.554        ]
00:11:50.554      }
00:11:50.554    }
00:11:50.554  }'
00:11:50.554    11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:50.554   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:11:50.554  pt2
00:11:50.554  pt3
00:11:50.554  pt4'
00:11:50.554    11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:50.554   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:11:50.554   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:50.554    11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:11:50.554    11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:50.554    11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:50.554    11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.554    11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:50.554   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:50.554   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:50.554   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:50.554    11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:11:50.554    11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:50.554    11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:50.554    11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.554    11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:50.554   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:50.554   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:50.554   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:50.554    11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:11:50.554    11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:50.554    11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.554    11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:50.554    11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:50.554   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:50.554   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:50.554   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:50.554    11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:50.554    11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:11:50.554    11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:50.554    11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.814    11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:50.814   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:50.814   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:50.814    11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:50.814    11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:50.814    11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.814    11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:11:50.814  [2024-12-16 11:33:16.658305] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:50.814    11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:50.814   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b9dcdb7d-5481-4380-acff-f818fa630e1b
00:11:50.814   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b9dcdb7d-5481-4380-acff-f818fa630e1b ']'
00:11:50.814   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:50.814   11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:50.814   11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.814  [2024-12-16 11:33:16.701922] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:50.814  [2024-12-16 11:33:16.702014] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:50.814  [2024-12-16 11:33:16.702137] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:50.814  [2024-12-16 11:33:16.702253] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:50.814  [2024-12-16 11:33:16.702317] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:11:50.814   11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:50.814    11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:50.814    11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:50.814    11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:11:50.814    11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.814    11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:50.814   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:11:50.814   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:11:50.814   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:50.814   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:11:50.814   11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:50.815   11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.815   11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:50.815   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:50.815   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:11:50.815   11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:50.815   11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.815   11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:50.815   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:50.815   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:11:50.815   11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:50.815   11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.815   11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:50.815   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:50.815   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:11:50.815   11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:50.815   11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.815   11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:50.815    11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:11:50.815    11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:11:50.815    11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:50.815    11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.815    11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:50.815   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:11:50.815   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:11:50.815   11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:11:50.815   11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:11:50.815   11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:11:50.815   11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:50.815    11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:11:50.815   11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:50.815   11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:11:50.815   11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:50.815   11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.815  [2024-12-16 11:33:16.857704] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:11:50.815  [2024-12-16 11:33:16.859777] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:11:50.815  [2024-12-16 11:33:16.859871] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:11:50.815  [2024-12-16 11:33:16.859925] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:11:50.815  [2024-12-16 11:33:16.859978] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:11:50.815  [2024-12-16 11:33:16.860032] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:11:50.815  [2024-12-16 11:33:16.860056] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:11:50.815  [2024-12-16 11:33:16.860086] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:11:50.815  [2024-12-16 11:33:16.860102] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:50.815  [2024-12-16 11:33:16.860113] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring
00:11:50.815  request:
00:11:50.815  {
00:11:50.815  "name": "raid_bdev1",
00:11:50.815  "raid_level": "concat",
00:11:50.815  "base_bdevs": [
00:11:50.815  "malloc1",
00:11:50.815  "malloc2",
00:11:50.815  "malloc3",
00:11:50.815  "malloc4"
00:11:50.815  ],
00:11:50.815  "strip_size_kb": 64,
00:11:50.815  "superblock": false,
00:11:50.815  "method": "bdev_raid_create",
00:11:50.815  "req_id": 1
00:11:50.815  }
00:11:50.815  Got JSON-RPC error response
00:11:50.815  response:
00:11:50.815  {
00:11:50.815  "code": -17,
00:11:50.815  "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:11:50.815  }
00:11:50.815   11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:11:50.815   11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:11:50.815   11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:11:50.815   11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:11:50.815   11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:11:50.815    11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:50.815    11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:50.815    11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:11:50.815    11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.074    11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.074   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:11:51.074   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:11:51.074   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:11:51.074   11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.074   11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.074  [2024-12-16 11:33:16.921498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:11:51.074  [2024-12-16 11:33:16.921606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:51.074  [2024-12-16 11:33:16.921668] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:11:51.074  [2024-12-16 11:33:16.921705] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:51.074  [2024-12-16 11:33:16.924085] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:51.074  [2024-12-16 11:33:16.924167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:11:51.074  [2024-12-16 11:33:16.924298] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:11:51.074  [2024-12-16 11:33:16.924380] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:11:51.074  pt1
00:11:51.074   11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.074   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4
00:11:51.074   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:51.074   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:51.074   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:51.074   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:51.074   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:51.074   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:51.074   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:51.074   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:51.074   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:51.074    11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:51.074    11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:51.074    11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.074    11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.074    11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.074   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:51.074    "name": "raid_bdev1",
00:11:51.074    "uuid": "b9dcdb7d-5481-4380-acff-f818fa630e1b",
00:11:51.074    "strip_size_kb": 64,
00:11:51.074    "state": "configuring",
00:11:51.074    "raid_level": "concat",
00:11:51.074    "superblock": true,
00:11:51.074    "num_base_bdevs": 4,
00:11:51.074    "num_base_bdevs_discovered": 1,
00:11:51.074    "num_base_bdevs_operational": 4,
00:11:51.074    "base_bdevs_list": [
00:11:51.074      {
00:11:51.074        "name": "pt1",
00:11:51.074        "uuid": "00000000-0000-0000-0000-000000000001",
00:11:51.074        "is_configured": true,
00:11:51.074        "data_offset": 2048,
00:11:51.074        "data_size": 63488
00:11:51.074      },
00:11:51.074      {
00:11:51.074        "name": null,
00:11:51.074        "uuid": "00000000-0000-0000-0000-000000000002",
00:11:51.074        "is_configured": false,
00:11:51.074        "data_offset": 2048,
00:11:51.074        "data_size": 63488
00:11:51.074      },
00:11:51.074      {
00:11:51.074        "name": null,
00:11:51.074        "uuid": "00000000-0000-0000-0000-000000000003",
00:11:51.074        "is_configured": false,
00:11:51.074        "data_offset": 2048,
00:11:51.074        "data_size": 63488
00:11:51.074      },
00:11:51.074      {
00:11:51.074        "name": null,
00:11:51.074        "uuid": "00000000-0000-0000-0000-000000000004",
00:11:51.074        "is_configured": false,
00:11:51.074        "data_offset": 2048,
00:11:51.074        "data_size": 63488
00:11:51.074      }
00:11:51.074    ]
00:11:51.074  }'
00:11:51.074   11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:51.074   11:33:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.334   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:11:51.334   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:51.334   11:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.334   11:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.334  [2024-12-16 11:33:17.328844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:51.334  [2024-12-16 11:33:17.328990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:51.334  [2024-12-16 11:33:17.329036] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:11:51.334  [2024-12-16 11:33:17.329077] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:51.334  [2024-12-16 11:33:17.329602] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:51.334  [2024-12-16 11:33:17.329666] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:51.334  [2024-12-16 11:33:17.329780] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:11:51.334  [2024-12-16 11:33:17.329834] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:51.334  pt2
00:11:51.334   11:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.334   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:11:51.334   11:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.334   11:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.334  [2024-12-16 11:33:17.336823] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:11:51.334   11:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.334   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4
00:11:51.334   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:51.334   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:51.334   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:51.334   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:51.334   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:51.334   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:51.334   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:51.334   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:51.334   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:51.334    11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:51.334    11:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.334    11:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.334    11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:51.334    11:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.334   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:51.334    "name": "raid_bdev1",
00:11:51.334    "uuid": "b9dcdb7d-5481-4380-acff-f818fa630e1b",
00:11:51.334    "strip_size_kb": 64,
00:11:51.334    "state": "configuring",
00:11:51.334    "raid_level": "concat",
00:11:51.334    "superblock": true,
00:11:51.334    "num_base_bdevs": 4,
00:11:51.334    "num_base_bdevs_discovered": 1,
00:11:51.334    "num_base_bdevs_operational": 4,
00:11:51.334    "base_bdevs_list": [
00:11:51.334      {
00:11:51.334        "name": "pt1",
00:11:51.334        "uuid": "00000000-0000-0000-0000-000000000001",
00:11:51.334        "is_configured": true,
00:11:51.334        "data_offset": 2048,
00:11:51.334        "data_size": 63488
00:11:51.334      },
00:11:51.334      {
00:11:51.334        "name": null,
00:11:51.334        "uuid": "00000000-0000-0000-0000-000000000002",
00:11:51.334        "is_configured": false,
00:11:51.334        "data_offset": 0,
00:11:51.334        "data_size": 63488
00:11:51.334      },
00:11:51.334      {
00:11:51.334        "name": null,
00:11:51.334        "uuid": "00000000-0000-0000-0000-000000000003",
00:11:51.334        "is_configured": false,
00:11:51.334        "data_offset": 2048,
00:11:51.334        "data_size": 63488
00:11:51.334      },
00:11:51.334      {
00:11:51.334        "name": null,
00:11:51.334        "uuid": "00000000-0000-0000-0000-000000000004",
00:11:51.334        "is_configured": false,
00:11:51.334        "data_offset": 2048,
00:11:51.334        "data_size": 63488
00:11:51.334      }
00:11:51.334    ]
00:11:51.334  }'
00:11:51.334   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:51.334   11:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.903   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:11:51.903   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:11:51.903   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:51.903   11:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.903   11:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.903  [2024-12-16 11:33:17.768122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:51.903  [2024-12-16 11:33:17.768199] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:51.903  [2024-12-16 11:33:17.768219] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:11:51.903  [2024-12-16 11:33:17.768231] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:51.903  [2024-12-16 11:33:17.768692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:51.903  [2024-12-16 11:33:17.768715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:51.903  [2024-12-16 11:33:17.768797] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:11:51.903  [2024-12-16 11:33:17.768825] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:51.903  pt2
00:11:51.903   11:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.903   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:11:51.903   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:11:51.903   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:11:51.903   11:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.903   11:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.903  [2024-12-16 11:33:17.776058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:11:51.903  [2024-12-16 11:33:17.776122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:51.903  [2024-12-16 11:33:17.776143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:11:51.903  [2024-12-16 11:33:17.776155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:51.903  [2024-12-16 11:33:17.776565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:51.903  [2024-12-16 11:33:17.776588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:11:51.903  [2024-12-16 11:33:17.776658] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:11:51.903  [2024-12-16 11:33:17.776682] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:11:51.903  pt3
00:11:51.903   11:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.903   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:11:51.903   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:11:51.903   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:11:51.903   11:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.903   11:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.903  [2024-12-16 11:33:17.784048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:11:51.903  [2024-12-16 11:33:17.784107] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:51.903  [2024-12-16 11:33:17.784127] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:11:51.903  [2024-12-16 11:33:17.784138] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:51.903  [2024-12-16 11:33:17.784478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:51.903  [2024-12-16 11:33:17.784497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:11:51.903  [2024-12-16 11:33:17.784584] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:11:51.903  [2024-12-16 11:33:17.784609] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:11:51.903  [2024-12-16 11:33:17.784721] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:11:51.903  [2024-12-16 11:33:17.784743] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:11:51.903  [2024-12-16 11:33:17.785003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:11:51.903  [2024-12-16 11:33:17.785129] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:11:51.903  [2024-12-16 11:33:17.785139] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:11:51.903  [2024-12-16 11:33:17.785247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:51.903  pt4
00:11:51.903   11:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.903   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:11:51.903   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:11:51.903   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:11:51.903   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:51.903   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:51.903   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:51.903   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:51.903   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:51.903   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:51.903   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:51.903   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:51.903   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:51.903    11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:51.903    11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:51.903    11:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.903    11:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.903    11:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.903   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:51.903    "name": "raid_bdev1",
00:11:51.904    "uuid": "b9dcdb7d-5481-4380-acff-f818fa630e1b",
00:11:51.904    "strip_size_kb": 64,
00:11:51.904    "state": "online",
00:11:51.904    "raid_level": "concat",
00:11:51.904    "superblock": true,
00:11:51.904    "num_base_bdevs": 4,
00:11:51.904    "num_base_bdevs_discovered": 4,
00:11:51.904    "num_base_bdevs_operational": 4,
00:11:51.904    "base_bdevs_list": [
00:11:51.904      {
00:11:51.904        "name": "pt1",
00:11:51.904        "uuid": "00000000-0000-0000-0000-000000000001",
00:11:51.904        "is_configured": true,
00:11:51.904        "data_offset": 2048,
00:11:51.904        "data_size": 63488
00:11:51.904      },
00:11:51.904      {
00:11:51.904        "name": "pt2",
00:11:51.904        "uuid": "00000000-0000-0000-0000-000000000002",
00:11:51.904        "is_configured": true,
00:11:51.904        "data_offset": 2048,
00:11:51.904        "data_size": 63488
00:11:51.904      },
00:11:51.904      {
00:11:51.904        "name": "pt3",
00:11:51.904        "uuid": "00000000-0000-0000-0000-000000000003",
00:11:51.904        "is_configured": true,
00:11:51.904        "data_offset": 2048,
00:11:51.904        "data_size": 63488
00:11:51.904      },
00:11:51.904      {
00:11:51.904        "name": "pt4",
00:11:51.904        "uuid": "00000000-0000-0000-0000-000000000004",
00:11:51.904        "is_configured": true,
00:11:51.904        "data_offset": 2048,
00:11:51.904        "data_size": 63488
00:11:51.904      }
00:11:51.904    ]
00:11:51.904  }'
00:11:51.904   11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:51.904   11:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
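The verify_raid_bdev_state checks above reduce to one RPC plus a jq filter over its output. A minimal standalone sketch of the same check (the scripts/rpc.py path and explicit socket are assumptions; the test itself goes through its rpc_cmd helper and the default /var/tmp/spdk.sock):

  # Query all raid bdevs and keep raid_bdev1, as bdev_raid.sh@113 does
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed client path under the repo root used in this run
  info=$("$rpc" -s /var/tmp/spdk.sock bdev_raid_get_bdevs all |
         jq -r '.[] | select(.name == "raid_bdev1")')
  # Assert the fields the helper compares: state, level, strip size, operational count
  [[ $(jq -r .state         <<< "$info") == online ]]
  [[ $(jq -r .raid_level    <<< "$info") == concat ]]
  [[ $(jq -r .strip_size_kb <<< "$info") == 64 ]]
  [[ $(jq -r .num_base_bdevs_operational <<< "$info") == 4 ]]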
00:11:52.474   11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:11:52.474   11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:11:52.474   11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:52.474   11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:52.474   11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:11:52.474   11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:52.474    11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:52.474    11:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:52.474    11:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:52.474    11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:52.474  [2024-12-16 11:33:18.243683] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:52.474    11:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:52.474   11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:52.474    "name": "raid_bdev1",
00:11:52.474    "aliases": [
00:11:52.474      "b9dcdb7d-5481-4380-acff-f818fa630e1b"
00:11:52.474    ],
00:11:52.474    "product_name": "Raid Volume",
00:11:52.474    "block_size": 512,
00:11:52.474    "num_blocks": 253952,
00:11:52.474    "uuid": "b9dcdb7d-5481-4380-acff-f818fa630e1b",
00:11:52.474    "assigned_rate_limits": {
00:11:52.474      "rw_ios_per_sec": 0,
00:11:52.474      "rw_mbytes_per_sec": 0,
00:11:52.475      "r_mbytes_per_sec": 0,
00:11:52.475      "w_mbytes_per_sec": 0
00:11:52.475    },
00:11:52.475    "claimed": false,
00:11:52.475    "zoned": false,
00:11:52.475    "supported_io_types": {
00:11:52.475      "read": true,
00:11:52.475      "write": true,
00:11:52.475      "unmap": true,
00:11:52.475      "flush": true,
00:11:52.475      "reset": true,
00:11:52.475      "nvme_admin": false,
00:11:52.475      "nvme_io": false,
00:11:52.475      "nvme_io_md": false,
00:11:52.475      "write_zeroes": true,
00:11:52.475      "zcopy": false,
00:11:52.475      "get_zone_info": false,
00:11:52.475      "zone_management": false,
00:11:52.475      "zone_append": false,
00:11:52.475      "compare": false,
00:11:52.475      "compare_and_write": false,
00:11:52.475      "abort": false,
00:11:52.475      "seek_hole": false,
00:11:52.475      "seek_data": false,
00:11:52.475      "copy": false,
00:11:52.475      "nvme_iov_md": false
00:11:52.475    },
00:11:52.475    "memory_domains": [
00:11:52.475      {
00:11:52.475        "dma_device_id": "system",
00:11:52.475        "dma_device_type": 1
00:11:52.475      },
00:11:52.475      {
00:11:52.475        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:52.475        "dma_device_type": 2
00:11:52.475      },
00:11:52.475      {
00:11:52.475        "dma_device_id": "system",
00:11:52.475        "dma_device_type": 1
00:11:52.475      },
00:11:52.475      {
00:11:52.475        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:52.475        "dma_device_type": 2
00:11:52.475      },
00:11:52.475      {
00:11:52.475        "dma_device_id": "system",
00:11:52.475        "dma_device_type": 1
00:11:52.475      },
00:11:52.475      {
00:11:52.475        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:52.475        "dma_device_type": 2
00:11:52.475      },
00:11:52.475      {
00:11:52.475        "dma_device_id": "system",
00:11:52.475        "dma_device_type": 1
00:11:52.475      },
00:11:52.475      {
00:11:52.475        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:52.475        "dma_device_type": 2
00:11:52.475      }
00:11:52.475    ],
00:11:52.475    "driver_specific": {
00:11:52.475      "raid": {
00:11:52.475        "uuid": "b9dcdb7d-5481-4380-acff-f818fa630e1b",
00:11:52.475        "strip_size_kb": 64,
00:11:52.475        "state": "online",
00:11:52.475        "raid_level": "concat",
00:11:52.475        "superblock": true,
00:11:52.475        "num_base_bdevs": 4,
00:11:52.475        "num_base_bdevs_discovered": 4,
00:11:52.475        "num_base_bdevs_operational": 4,
00:11:52.475        "base_bdevs_list": [
00:11:52.475          {
00:11:52.475            "name": "pt1",
00:11:52.475            "uuid": "00000000-0000-0000-0000-000000000001",
00:11:52.475            "is_configured": true,
00:11:52.475            "data_offset": 2048,
00:11:52.475            "data_size": 63488
00:11:52.475          },
00:11:52.475          {
00:11:52.475            "name": "pt2",
00:11:52.475            "uuid": "00000000-0000-0000-0000-000000000002",
00:11:52.475            "is_configured": true,
00:11:52.475            "data_offset": 2048,
00:11:52.475            "data_size": 63488
00:11:52.475          },
00:11:52.475          {
00:11:52.475            "name": "pt3",
00:11:52.475            "uuid": "00000000-0000-0000-0000-000000000003",
00:11:52.475            "is_configured": true,
00:11:52.475            "data_offset": 2048,
00:11:52.475            "data_size": 63488
00:11:52.475          },
00:11:52.475          {
00:11:52.475            "name": "pt4",
00:11:52.475            "uuid": "00000000-0000-0000-0000-000000000004",
00:11:52.475            "is_configured": true,
00:11:52.475            "data_offset": 2048,
00:11:52.475            "data_size": 63488
00:11:52.475          }
00:11:52.475        ]
00:11:52.475      }
00:11:52.475    }
00:11:52.475  }'
00:11:52.475    11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:52.475   11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:11:52.475  pt2
00:11:52.475  pt3
00:11:52.475  pt4'
00:11:52.475    11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:52.475   11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:11:52.475   11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:52.475    11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:11:52.475    11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:52.475    11:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:52.475    11:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:52.475    11:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:52.475   11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:52.475   11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:52.475   11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:52.475    11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:52.475    11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:11:52.475    11:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:52.475    11:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:52.475    11:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:52.475   11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:52.475   11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:52.475   11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:52.475    11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:11:52.475    11:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:52.475    11:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:52.475    11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:52.475    11:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:52.475   11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:52.475   11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:52.475   11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:52.475    11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:11:52.475    11:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:52.475    11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:52.475    11:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:52.475    11:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:52.475   11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:11:52.475   11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:11:52.733    11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:11:52.733    11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:52.733    11:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:52.734    11:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:52.734  [2024-12-16 11:33:18.547109] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:52.734    11:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:52.734   11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b9dcdb7d-5481-4380-acff-f818fa630e1b '!=' b9dcdb7d-5481-4380-acff-f818fa630e1b ']'
00:11:52.734   11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat
00:11:52.734   11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:11:52.734   11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:11:52.734   11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83798
00:11:52.734   11:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 83798 ']'
00:11:52.734   11:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 83798
00:11:52.734    11:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname
00:11:52.734   11:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:52.734    11:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83798
00:11:52.734   11:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:11:52.734  killing process with pid 83798
00:11:52.734   11:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:11:52.734   11:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83798'
00:11:52.734   11:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 83798
00:11:52.734  [2024-12-16 11:33:18.633483] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:11:52.734  [2024-12-16 11:33:18.633606] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:52.734   11:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 83798
00:11:52.734  [2024-12-16 11:33:18.633687] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:52.734  [2024-12-16 11:33:18.633700] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:11:52.734  [2024-12-16 11:33:18.679374] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:11:52.992  ************************************
00:11:52.992  END TEST raid_superblock_test
00:11:52.992  ************************************
00:11:52.992   11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:11:52.992  
00:11:52.992  real	0m4.091s
00:11:52.992  user	0m6.461s
00:11:52.992  sys	0m0.862s
00:11:52.992   11:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:52.992   11:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:52.992   11:33:18 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read
00:11:52.992   11:33:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:11:52.992   11:33:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:52.992   11:33:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:11:52.992  ************************************
00:11:52.992  START TEST raid_read_error_test
00:11:52.992  ************************************
00:11:52.992   11:33:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read
00:11:52.992   11:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat
00:11:52.992   11:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4
00:11:52.992   11:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:11:52.992    11:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:11:52.992    11:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:52.992    11:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:11:52.992    11:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:52.992    11:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:52.992    11:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:11:52.992    11:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:52.992    11:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:52.992    11:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:11:52.992    11:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:52.992    11:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:52.992    11:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4
00:11:52.992    11:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:52.992    11:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:52.992   11:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:11:52.992   11:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:11:52.992   11:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:11:52.992   11:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:11:52.992   11:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:11:52.992   11:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:11:52.992   11:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:11:52.992   11:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']'
00:11:52.992   11:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:11:52.992   11:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:11:52.992    11:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:11:52.992   11:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.v1ms9Bt8gH
00:11:52.992   11:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=84046
00:11:52.992   11:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 84046
00:11:52.992   11:33:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 84046 ']'
00:11:52.992   11:33:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:52.993   11:33:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:11:52.993  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:52.993   11:33:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:11:52.993   11:33:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:52.993   11:33:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:11:52.993   11:33:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:53.251  [2024-12-16 11:33:19.088998] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:11:53.251  [2024-12-16 11:33:19.089186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84046 ]
00:11:53.251  [2024-12-16 11:33:19.251351] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:53.251  [2024-12-16 11:33:19.304181] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:11:53.510  [2024-12-16 11:33:19.348145] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:53.510  [2024-12-16 11:33:19.348289] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:54.077   11:33:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:11:54.077   11:33:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0
00:11:54.077   11:33:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:54.077   11:33:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:11:54.077   11:33:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:54.077   11:33:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:54.077  BaseBdev1_malloc
00:11:54.077   11:33:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:54.077   11:33:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:11:54.077   11:33:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:54.077   11:33:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:54.077  true
00:11:54.077   11:33:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:54.077   11:33:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:11:54.077   11:33:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:54.077   11:33:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:54.077  [2024-12-16 11:33:19.959462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:11:54.077  [2024-12-16 11:33:19.959550] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:54.077  [2024-12-16 11:33:19.959578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:11:54.077  [2024-12-16 11:33:19.959588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:54.077  [2024-12-16 11:33:19.961931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:54.077  [2024-12-16 11:33:19.961973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:11:54.077  BaseBdev1
00:11:54.077   11:33:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:54.077   11:33:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:54.077   11:33:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:11:54.077   11:33:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:54.077   11:33:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:54.077  BaseBdev2_malloc
00:11:54.077   11:33:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:54.077   11:33:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:11:54.077   11:33:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:54.077   11:33:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:54.077  true
00:11:54.077   11:33:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:54.077   11:33:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:11:54.077   11:33:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:54.077   11:33:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:54.077  [2024-12-16 11:33:20.002970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:11:54.077  [2024-12-16 11:33:20.003025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:54.077  [2024-12-16 11:33:20.003045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:11:54.077  [2024-12-16 11:33:20.003054] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:54.077  [2024-12-16 11:33:20.005262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:54.077  [2024-12-16 11:33:20.005340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:11:54.077  BaseBdev2
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:54.077  BaseBdev3_malloc
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:54.077  true
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:54.077  [2024-12-16 11:33:20.036079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:11:54.077  [2024-12-16 11:33:20.036194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:54.077  [2024-12-16 11:33:20.036222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:11:54.077  [2024-12-16 11:33:20.036232] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:54.077  [2024-12-16 11:33:20.038446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:54.077  [2024-12-16 11:33:20.038482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:11:54.077  BaseBdev3
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:54.077  BaseBdev4_malloc
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:54.077  true
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:54.077  [2024-12-16 11:33:20.069058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc
00:11:54.077  [2024-12-16 11:33:20.069117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:54.077  [2024-12-16 11:33:20.069144] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:11:54.077  [2024-12-16 11:33:20.069154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:54.077  [2024-12-16 11:33:20.071544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:54.077  [2024-12-16 11:33:20.071605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:11:54.077  BaseBdev4
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:54.077  [2024-12-16 11:33:20.077112] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:54.077  [2024-12-16 11:33:20.079142] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:54.077  [2024-12-16 11:33:20.079235] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:54.077  [2024-12-16 11:33:20.079313] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:11:54.077  [2024-12-16 11:33:20.079616] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080
00:11:54.077  [2024-12-16 11:33:20.079657] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:11:54.077  [2024-12-16 11:33:20.080001] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:11:54.077  [2024-12-16 11:33:20.080212] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080
00:11:54.077  [2024-12-16 11:33:20.080266] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080
00:11:54.077  [2024-12-16 11:33:20.080473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:54.077    11:33:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:54.077    11:33:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:54.077    11:33:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:54.077    11:33:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:54.077    11:33:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:54.077    "name": "raid_bdev1",
00:11:54.077    "uuid": "03fa546d-1393-420f-8924-86a3896860f7",
00:11:54.077    "strip_size_kb": 64,
00:11:54.077    "state": "online",
00:11:54.077    "raid_level": "concat",
00:11:54.077    "superblock": true,
00:11:54.077    "num_base_bdevs": 4,
00:11:54.077    "num_base_bdevs_discovered": 4,
00:11:54.077    "num_base_bdevs_operational": 4,
00:11:54.077    "base_bdevs_list": [
00:11:54.077      {
00:11:54.077        "name": "BaseBdev1",
00:11:54.077        "uuid": "4a5bb895-3258-5a03-a7bb-7866870a15e0",
00:11:54.077        "is_configured": true,
00:11:54.077        "data_offset": 2048,
00:11:54.077        "data_size": 63488
00:11:54.077      },
00:11:54.077      {
00:11:54.077        "name": "BaseBdev2",
00:11:54.077        "uuid": "42e4d3fb-0c3d-58f4-a3ad-78c0d538458c",
00:11:54.077        "is_configured": true,
00:11:54.077        "data_offset": 2048,
00:11:54.077        "data_size": 63488
00:11:54.077      },
00:11:54.077      {
00:11:54.077        "name": "BaseBdev3",
00:11:54.077        "uuid": "f8bbd868-9389-5924-8180-e53a68353daa",
00:11:54.077        "is_configured": true,
00:11:54.077        "data_offset": 2048,
00:11:54.077        "data_size": 63488
00:11:54.077      },
00:11:54.077      {
00:11:54.077        "name": "BaseBdev4",
00:11:54.077        "uuid": "57290f5a-db47-5391-938b-02c1ae1f132b",
00:11:54.077        "is_configured": true,
00:11:54.077        "data_offset": 2048,
00:11:54.077        "data_size": 63488
00:11:54.077      }
00:11:54.077    ]
00:11:54.077  }'
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:54.077   11:33:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
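Setup for this test differs from raid_superblock_test in one way that matters below: each base bdev is a malloc bdev wrapped by an error-injection bdev and then a passthru, which is what lets bdev_error_inject_error later fail reads on a single leg. A minimal sketch of the RPC sequence traced above (the explicit rpc.py invocation is an assumption; the EE_ prefix on the error bdev's name is taken from the trace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed client path
  for i in 1 2 3 4; do
      "$rpc" bdev_malloc_create 32 512 -b BaseBdev${i}_malloc     # 32 MiB backing store, 512 B blocks
      "$rpc" bdev_error_create BaseBdev${i}_malloc                # exposes EE_BaseBdev${i}_malloc
      "$rpc" bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
  done
  # Assemble the concat volume with an on-disk superblock (-s), 64 KiB strip size
  "$rpc" bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s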
00:11:54.645   11:33:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:11:54.645   11:33:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:11:54.645  [2024-12-16 11:33:20.648599] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:11:55.581   11:33:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:11:55.581   11:33:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:55.581   11:33:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:55.581   11:33:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:55.581   11:33:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:11:55.581   11:33:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]]
00:11:55.581   11:33:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4
00:11:55.581   11:33:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:11:55.581   11:33:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:55.581   11:33:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:55.581   11:33:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:55.581   11:33:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:55.581   11:33:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:55.581   11:33:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:55.581   11:33:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:55.581   11:33:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:55.581   11:33:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:55.581    11:33:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:55.581    11:33:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:55.581    11:33:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:55.581    11:33:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:55.581    11:33:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:55.581   11:33:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:55.581    "name": "raid_bdev1",
00:11:55.581    "uuid": "03fa546d-1393-420f-8924-86a3896860f7",
00:11:55.581    "strip_size_kb": 64,
00:11:55.581    "state": "online",
00:11:55.581    "raid_level": "concat",
00:11:55.581    "superblock": true,
00:11:55.581    "num_base_bdevs": 4,
00:11:55.581    "num_base_bdevs_discovered": 4,
00:11:55.581    "num_base_bdevs_operational": 4,
00:11:55.581    "base_bdevs_list": [
00:11:55.581      {
00:11:55.581        "name": "BaseBdev1",
00:11:55.581        "uuid": "4a5bb895-3258-5a03-a7bb-7866870a15e0",
00:11:55.581        "is_configured": true,
00:11:55.581        "data_offset": 2048,
00:11:55.581        "data_size": 63488
00:11:55.581      },
00:11:55.581      {
00:11:55.581        "name": "BaseBdev2",
00:11:55.581        "uuid": "42e4d3fb-0c3d-58f4-a3ad-78c0d538458c",
00:11:55.581        "is_configured": true,
00:11:55.581        "data_offset": 2048,
00:11:55.581        "data_size": 63488
00:11:55.581      },
00:11:55.581      {
00:11:55.581        "name": "BaseBdev3",
00:11:55.581        "uuid": "f8bbd868-9389-5924-8180-e53a68353daa",
00:11:55.581        "is_configured": true,
00:11:55.581        "data_offset": 2048,
00:11:55.581        "data_size": 63488
00:11:55.581      },
00:11:55.581      {
00:11:55.581        "name": "BaseBdev4",
00:11:55.581        "uuid": "57290f5a-db47-5391-938b-02c1ae1f132b",
00:11:55.581        "is_configured": true,
00:11:55.581        "data_offset": 2048,
00:11:55.581        "data_size": 63488
00:11:55.581      }
00:11:55.581    ]
00:11:55.581  }'
00:11:55.581   11:33:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:55.581   11:33:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.149   11:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:56.149   11:33:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:56.149   11:33:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.149  [2024-12-16 11:33:22.021181] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:56.149  [2024-12-16 11:33:22.021213] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:56.149  [2024-12-16 11:33:22.023775] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:56.149  [2024-12-16 11:33:22.023888] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:56.149  [2024-12-16 11:33:22.023944] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:56.149  [2024-12-16 11:33:22.023953] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline
00:11:56.149  {
00:11:56.149    "results": [
00:11:56.149      {
00:11:56.149        "job": "raid_bdev1",
00:11:56.149        "core_mask": "0x1",
00:11:56.149        "workload": "randrw",
00:11:56.149        "percentage": 50,
00:11:56.149        "status": "finished",
00:11:56.149        "queue_depth": 1,
00:11:56.149        "io_size": 131072,
00:11:56.149        "runtime": 1.373119,
00:11:56.149        "iops": 15537.619099291467,
00:11:56.149        "mibps": 1942.2023874114334,
00:11:56.149        "io_failed": 1,
00:11:56.149        "io_timeout": 0,
00:11:56.149        "avg_latency_us": 89.29107005729088,
00:11:56.149        "min_latency_us": 26.494323144104804,
00:11:56.149        "max_latency_us": 1616.9362445414847
00:11:56.149      }
00:11:56.149    ],
00:11:56.149    "core_count": 1
00:11:56.149  }
00:11:56.149   11:33:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:56.149   11:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 84046
00:11:56.149   11:33:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 84046 ']'
00:11:56.149   11:33:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 84046
00:11:56.149    11:33:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname
00:11:56.149   11:33:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:56.149    11:33:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84046
00:11:56.150   11:33:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:11:56.150   11:33:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:11:56.150   11:33:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84046'
00:11:56.150  killing process with pid 84046
00:11:56.150   11:33:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 84046
00:11:56.150   11:33:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 84046
00:11:56.150  [2024-12-16 11:33:22.062217] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:11:56.150  [2024-12-16 11:33:22.099344] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:11:56.409    11:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.v1ms9Bt8gH
00:11:56.409    11:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:11:56.409    11:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:11:56.409   11:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73
00:11:56.409   11:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
00:11:56.409   11:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:11:56.409   11:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:11:56.409   11:33:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]]
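(annotation) The three command substitutions above just pull the failures-per-second column for raid_bdev1 out of the bdevperf log; because concat has no redundancy (has_redundancy returned 1), the test treats a nonzero rate as the expected outcome. A minimal standalone sketch of that extraction, assuming the same bdevperf summary layout the test relies on (failures/s in the sixth column of the raid_bdev1 job line):

    bdevperf_log=/raidtest/tmp.v1ms9Bt8gH          # temp file created earlier by mktemp
    fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
    # concat cannot absorb the injected error, so a nonzero failure rate is the pass condition here
    [[ "$fail_per_s" != "0.00" ]] && echo "injected read error surfaced: $fail_per_s failures/s"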
00:11:56.409  
00:11:56.409  real	0m3.369s
00:11:56.409  user	0m4.260s
00:11:56.409  sys	0m0.577s
00:11:56.409   11:33:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:56.409   11:33:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.409  ************************************
00:11:56.409  END TEST raid_read_error_test
00:11:56.409  ************************************
00:11:56.409   11:33:22 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write
00:11:56.409   11:33:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:11:56.409   11:33:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:56.409   11:33:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:11:56.409  ************************************
00:11:56.409  START TEST raid_write_error_test
00:11:56.409  ************************************
00:11:56.409   11:33:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write
00:11:56.409   11:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat
00:11:56.409   11:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4
00:11:56.409   11:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:11:56.409    11:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:11:56.409    11:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:56.409    11:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:11:56.409    11:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:56.409    11:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:56.409    11:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:11:56.409    11:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:56.409    11:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:56.409    11:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:11:56.409    11:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:56.409    11:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:56.409    11:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4
00:11:56.409    11:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:56.409    11:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:56.409   11:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:11:56.409   11:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:11:56.410   11:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:11:56.410   11:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:11:56.410   11:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:11:56.410   11:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:11:56.410   11:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:11:56.410   11:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']'
00:11:56.410   11:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:11:56.410   11:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:11:56.410    11:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:11:56.410   11:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pfImRjLtsB
00:11:56.410   11:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=84181
00:11:56.410   11:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 84181
00:11:56.410   11:33:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 84181 ']'
00:11:56.410   11:33:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:56.410   11:33:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:11:56.410   11:33:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:56.410  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:56.410   11:33:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:11:56.410   11:33:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.410   11:33:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:11:56.669  [2024-12-16 11:33:22.523093] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:11:56.669  [2024-12-16 11:33:22.523233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84181 ]
00:11:56.669  [2024-12-16 11:33:22.684390] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:56.669  [2024-12-16 11:33:22.732536] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:11:56.929  [2024-12-16 11:33:22.775852] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:56.929  [2024-12-16 11:33:22.775891] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
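(annotation) For the write-error run, bdevperf is started suspended (-z) so the RAID volume can be assembled before any I/O is issued: 60 s of 50/50 randrw, 128 KiB I/O, queue depth 1, targeting raid_bdev1. A sketch of an equivalent manual launch, assuming the default /var/tmp/spdk.sock RPC socket that waitforlisten polls above, and assuming output is redirected to the mktemp log the test greps later; -f and -L bdev_raid are simply passed through as in the command line above:

    spdk=/home/vagrant/spdk_repo/spdk
    log=$(mktemp -p /raidtest)
    # -z holds the workload until perform_tests is called over RPC
    "$spdk/build/examples/bdevperf" -T raid_bdev1 -t 60 -w randrw -M 50 \
        -o 128k -q 1 -z -f -L bdev_raid > "$log" 2>&1 &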
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:57.502  BaseBdev1_malloc
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:57.502  true
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:57.502  [2024-12-16 11:33:23.402738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:11:57.502  [2024-12-16 11:33:23.402797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:57.502  [2024-12-16 11:33:23.402840] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:11:57.502  [2024-12-16 11:33:23.402851] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:57.502  [2024-12-16 11:33:23.405162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:57.502  [2024-12-16 11:33:23.405199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:11:57.502  BaseBdev1
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:57.502  BaseBdev2_malloc
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:57.502  true
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:57.502  [2024-12-16 11:33:23.440766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:11:57.502  [2024-12-16 11:33:23.440821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:57.502  [2024-12-16 11:33:23.440840] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:11:57.502  [2024-12-16 11:33:23.440850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:57.502  [2024-12-16 11:33:23.443102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:57.502  [2024-12-16 11:33:23.443184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:11:57.502  BaseBdev2
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:57.502  BaseBdev3_malloc
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:57.502  true
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:57.502  [2024-12-16 11:33:23.469559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:11:57.502  [2024-12-16 11:33:23.469655] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:57.502  [2024-12-16 11:33:23.469679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:11:57.502  [2024-12-16 11:33:23.469688] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:57.502  [2024-12-16 11:33:23.472008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:57.502  [2024-12-16 11:33:23.472050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:11:57.502  BaseBdev3
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:57.502  BaseBdev4_malloc
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:57.502  true
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:57.502  [2024-12-16 11:33:23.498275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc
00:11:57.502  [2024-12-16 11:33:23.498328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:57.502  [2024-12-16 11:33:23.498351] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:11:57.502  [2024-12-16 11:33:23.498361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:57.502  [2024-12-16 11:33:23.500797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:57.502  [2024-12-16 11:33:23.500886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:11:57.502  BaseBdev4
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
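(annotation) Each of the four base devices above is a three-layer stack: a malloc bdev, an error bdev wrapping it (the fault-injection point), and a passthru bdev that gives the RAID module a stable member name. A sketch of the three RPCs for one slot, assuming the standard scripts/rpc.py client and default socket; the test drives the same calls through its rpc_cmd helper:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed path to the SPDK RPC client
    # 32 MB backing store with 512-byte blocks (65536 blocks total)
    "$rpc" bdev_malloc_create 32 512 -b BaseBdev1_malloc
    # error bdev exposes EE_BaseBdev1_malloc; this is where read/write failures get injected
    "$rpc" bdev_error_create BaseBdev1_malloc
    # passthru on top of the error bdev becomes the RAID member BaseBdev1
    "$rpc" bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1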
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.502   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:57.502  [2024-12-16 11:33:23.506333] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:57.502  [2024-12-16 11:33:23.508584] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:57.502  [2024-12-16 11:33:23.508685] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:57.502  [2024-12-16 11:33:23.508747] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:11:57.502  [2024-12-16 11:33:23.508994] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080
00:11:57.502  [2024-12-16 11:33:23.509015] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:11:57.502  [2024-12-16 11:33:23.509298] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:11:57.502  [2024-12-16 11:33:23.509448] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080
00:11:57.503  [2024-12-16 11:33:23.509463] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080
00:11:57.503  [2024-12-16 11:33:23.509635] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:57.503   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
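(annotation) The array itself is then assembled over the four passthru bdevs as a concat volume with a 64 KiB strip and an on-disk superblock (-s), which is consistent with the 2048-block data_offset and 63488-block data_size reported for each member in the state dump below. Equivalent standalone call, same assumed rpc.py client:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed path
    "$rpc" bdev_raid_create -z 64 -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s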
00:11:57.503   11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:11:57.503   11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:57.503   11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:57.503   11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:57.503   11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:57.503   11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:57.503   11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:57.503   11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:57.503   11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:57.503   11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:57.503    11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:57.503    11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:57.503    11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.503    11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:57.503    11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.503   11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:57.503    "name": "raid_bdev1",
00:11:57.503    "uuid": "e3c2f7cf-b2bb-4087-9098-fb604a50232e",
00:11:57.503    "strip_size_kb": 64,
00:11:57.503    "state": "online",
00:11:57.503    "raid_level": "concat",
00:11:57.503    "superblock": true,
00:11:57.503    "num_base_bdevs": 4,
00:11:57.503    "num_base_bdevs_discovered": 4,
00:11:57.503    "num_base_bdevs_operational": 4,
00:11:57.503    "base_bdevs_list": [
00:11:57.503      {
00:11:57.503        "name": "BaseBdev1",
00:11:57.503        "uuid": "227c8856-26b8-52c0-8c2e-57b0765da20a",
00:11:57.503        "is_configured": true,
00:11:57.503        "data_offset": 2048,
00:11:57.503        "data_size": 63488
00:11:57.503      },
00:11:57.503      {
00:11:57.503        "name": "BaseBdev2",
00:11:57.503        "uuid": "a9b1665d-d539-5736-b9c0-1fc4fe07256d",
00:11:57.503        "is_configured": true,
00:11:57.503        "data_offset": 2048,
00:11:57.503        "data_size": 63488
00:11:57.503      },
00:11:57.503      {
00:11:57.503        "name": "BaseBdev3",
00:11:57.503        "uuid": "aa45535e-ba84-5820-978a-c4c71227a779",
00:11:57.503        "is_configured": true,
00:11:57.503        "data_offset": 2048,
00:11:57.503        "data_size": 63488
00:11:57.503      },
00:11:57.503      {
00:11:57.503        "name": "BaseBdev4",
00:11:57.503        "uuid": "25f44ec7-2206-5c5d-a505-742d6a61e62d",
00:11:57.503        "is_configured": true,
00:11:57.503        "data_offset": 2048,
00:11:57.503        "data_size": 63488
00:11:57.503      }
00:11:57.503    ]
00:11:57.503  }'
00:11:57.503   11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:57.503   11:33:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:58.072   11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:11:58.072   11:33:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:11:58.072  [2024-12-16 11:33:24.029821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:11:59.011   11:33:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:11:59.011   11:33:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:59.011   11:33:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:59.011   11:33:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
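(annotation) With the workload already kicked off by perform_tests, a write failure is then armed on the error bdev sitting underneath BaseBdev1, so an in-flight write to raid_bdev1 fails at the base-device level. Standalone equivalent, same assumed rpc.py client:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed path
    # arm a 'failure' error for write I/O on the error bdev under BaseBdev1
    "$rpc" bdev_error_inject_error EE_BaseBdev1_malloc write failure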
00:11:59.011   11:33:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:11:59.011   11:33:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]]
00:11:59.011   11:33:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4
00:11:59.011   11:33:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:11:59.011   11:33:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:59.011   11:33:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:59.011   11:33:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:59.011   11:33:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:59.011   11:33:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:59.011   11:33:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:59.011   11:33:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:59.011   11:33:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:59.011   11:33:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:59.011    11:33:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:59.011    11:33:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:59.011    11:33:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:59.011    11:33:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:59.011    11:33:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:59.011   11:33:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:59.011    "name": "raid_bdev1",
00:11:59.011    "uuid": "e3c2f7cf-b2bb-4087-9098-fb604a50232e",
00:11:59.011    "strip_size_kb": 64,
00:11:59.011    "state": "online",
00:11:59.011    "raid_level": "concat",
00:11:59.011    "superblock": true,
00:11:59.011    "num_base_bdevs": 4,
00:11:59.011    "num_base_bdevs_discovered": 4,
00:11:59.011    "num_base_bdevs_operational": 4,
00:11:59.011    "base_bdevs_list": [
00:11:59.011      {
00:11:59.011        "name": "BaseBdev1",
00:11:59.011        "uuid": "227c8856-26b8-52c0-8c2e-57b0765da20a",
00:11:59.011        "is_configured": true,
00:11:59.011        "data_offset": 2048,
00:11:59.011        "data_size": 63488
00:11:59.011      },
00:11:59.011      {
00:11:59.011        "name": "BaseBdev2",
00:11:59.011        "uuid": "a9b1665d-d539-5736-b9c0-1fc4fe07256d",
00:11:59.011        "is_configured": true,
00:11:59.011        "data_offset": 2048,
00:11:59.011        "data_size": 63488
00:11:59.011      },
00:11:59.011      {
00:11:59.011        "name": "BaseBdev3",
00:11:59.011        "uuid": "aa45535e-ba84-5820-978a-c4c71227a779",
00:11:59.011        "is_configured": true,
00:11:59.011        "data_offset": 2048,
00:11:59.011        "data_size": 63488
00:11:59.011      },
00:11:59.011      {
00:11:59.011        "name": "BaseBdev4",
00:11:59.011        "uuid": "25f44ec7-2206-5c5d-a505-742d6a61e62d",
00:11:59.011        "is_configured": true,
00:11:59.011        "data_offset": 2048,
00:11:59.011        "data_size": 63488
00:11:59.011      }
00:11:59.012    ]
00:11:59.012  }'
00:11:59.012   11:33:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:59.012   11:33:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:59.587   11:33:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:59.587   11:33:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:59.587   11:33:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:59.587  [2024-12-16 11:33:25.430113] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:59.587  [2024-12-16 11:33:25.430188] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:59.587  [2024-12-16 11:33:25.432958] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:59.587  [2024-12-16 11:33:25.433081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:59.587  [2024-12-16 11:33:25.433181] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:59.587  [2024-12-16 11:33:25.433244] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline
00:11:59.587   11:33:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:59.587   11:33:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 84181
00:11:59.587   11:33:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 84181 ']'
00:11:59.587  {
00:11:59.587    "results": [
00:11:59.587      {
00:11:59.587        "job": "raid_bdev1",
00:11:59.587        "core_mask": "0x1",
00:11:59.587        "workload": "randrw",
00:11:59.587        "percentage": 50,
00:11:59.587        "status": "finished",
00:11:59.587        "queue_depth": 1,
00:11:59.587        "io_size": 131072,
00:11:59.587        "runtime": 1.401051,
00:11:59.587        "iops": 15196.448951537097,
00:11:59.587        "mibps": 1899.5561189421371,
00:11:59.587        "io_failed": 1,
00:11:59.587        "io_timeout": 0,
00:11:59.587        "avg_latency_us": 91.23718952194768,
00:11:59.587        "min_latency_us": 26.829694323144103,
00:11:59.587        "max_latency_us": 1459.5353711790392
00:11:59.587      }
00:11:59.587    ],
00:11:59.587    "core_count": 1
00:11:59.587  }
00:11:59.587   11:33:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 84181
00:11:59.587    11:33:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname
00:11:59.587   11:33:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:59.587    11:33:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84181
00:11:59.587   11:33:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:11:59.587   11:33:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:11:59.587   11:33:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84181'
00:11:59.587  killing process with pid 84181
00:11:59.587   11:33:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 84181
00:11:59.587   11:33:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 84181
00:11:59.587  [2024-12-16 11:33:25.475528] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:11:59.587  [2024-12-16 11:33:25.512795] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:11:59.847    11:33:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:11:59.847    11:33:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pfImRjLtsB
00:11:59.847    11:33:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:11:59.847   11:33:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71
00:11:59.847   11:33:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
00:11:59.847  ************************************
00:11:59.847  END TEST raid_write_error_test
00:11:59.847  ************************************
00:11:59.847   11:33:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:11:59.847   11:33:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:11:59.847   11:33:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]]
00:11:59.847  
00:11:59.847  real	0m3.339s
00:11:59.847  user	0m4.230s
00:11:59.847  sys	0m0.561s
00:11:59.847   11:33:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:59.847   11:33:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:59.847   11:33:25 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:11:59.847   11:33:25 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false
00:11:59.847   11:33:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:11:59.847   11:33:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:59.847   11:33:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:11:59.847  ************************************
00:11:59.847  START TEST raid_state_function_test
00:11:59.847  ************************************
00:11:59.847   11:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false
00:11:59.847   11:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:11:59.848   11:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:11:59.848   11:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:11:59.848   11:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:11:59.848    11:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:11:59.848    11:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:59.848    11:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:11:59.848    11:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:59.848    11:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:59.848    11:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:11:59.848    11:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:59.848    11:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:59.848    11:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:11:59.848    11:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:59.848    11:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:59.848    11:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:11:59.848    11:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:59.848    11:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:59.848   11:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:11:59.848   11:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:11:59.848   11:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:11:59.848   11:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:11:59.848   11:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:11:59.848   11:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:11:59.848   11:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:11:59.848   11:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:11:59.848   11:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:11:59.848   11:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:11:59.848   11:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=84308
00:11:59.848   11:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:11:59.848  Process raid pid: 84308
00:11:59.848   11:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84308'
00:11:59.848   11:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 84308
00:11:59.848   11:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 84308 ']'
00:11:59.848   11:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:59.848   11:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:11:59.848   11:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:59.848  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:59.848   11:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:11:59.848   11:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.108  [2024-12-16 11:33:25.926230] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:12:00.108  [2024-12-16 11:33:25.926479] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:00.108  [2024-12-16 11:33:26.089607] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:00.108  [2024-12-16 11:33:26.138510] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:12:00.367  [2024-12-16 11:33:26.182295] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:00.367  [2024-12-16 11:33:26.182413] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:00.936   11:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:12:00.936   11:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:12:00.936   11:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:12:00.936   11:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:00.936   11:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.936  [2024-12-16 11:33:26.776519] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:00.936  [2024-12-16 11:33:26.776664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:00.936  [2024-12-16 11:33:26.776700] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:00.936  [2024-12-16 11:33:26.776725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:00.936  [2024-12-16 11:33:26.776746] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:00.936  [2024-12-16 11:33:26.776770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:00.936  [2024-12-16 11:33:26.776788] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:12:00.936  [2024-12-16 11:33:26.776818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:12:00.936   11:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
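(annotation) bdev_raid_create may reference base bdevs that do not exist yet: the create succeeds, and the array simply waits in the "configuring" state (num_base_bdevs_discovered 0 in the dump below) until its members appear. A sketch of the same call plus the state query the test runs next, same assumed rpc.py client; the jq filter matches the one used above, extended to print only the state field:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed path
    "$rpc" bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    "$rpc" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'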
00:12:00.936   11:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:00.936   11:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:00.936   11:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:00.936   11:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:00.936   11:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:00.936   11:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:00.936   11:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:00.936   11:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:00.936   11:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:00.936   11:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:00.936    11:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:00.936    11:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:00.937    11:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:00.937    11:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.937    11:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:00.937   11:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:00.937    "name": "Existed_Raid",
00:12:00.937    "uuid": "00000000-0000-0000-0000-000000000000",
00:12:00.937    "strip_size_kb": 0,
00:12:00.937    "state": "configuring",
00:12:00.937    "raid_level": "raid1",
00:12:00.937    "superblock": false,
00:12:00.937    "num_base_bdevs": 4,
00:12:00.937    "num_base_bdevs_discovered": 0,
00:12:00.937    "num_base_bdevs_operational": 4,
00:12:00.937    "base_bdevs_list": [
00:12:00.937      {
00:12:00.937        "name": "BaseBdev1",
00:12:00.937        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:00.937        "is_configured": false,
00:12:00.937        "data_offset": 0,
00:12:00.937        "data_size": 0
00:12:00.937      },
00:12:00.937      {
00:12:00.937        "name": "BaseBdev2",
00:12:00.937        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:00.937        "is_configured": false,
00:12:00.937        "data_offset": 0,
00:12:00.937        "data_size": 0
00:12:00.937      },
00:12:00.937      {
00:12:00.937        "name": "BaseBdev3",
00:12:00.937        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:00.937        "is_configured": false,
00:12:00.937        "data_offset": 0,
00:12:00.937        "data_size": 0
00:12:00.937      },
00:12:00.937      {
00:12:00.937        "name": "BaseBdev4",
00:12:00.937        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:00.937        "is_configured": false,
00:12:00.937        "data_offset": 0,
00:12:00.937        "data_size": 0
00:12:00.937      }
00:12:00.937    ]
00:12:00.937  }'
00:12:00.937   11:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:00.937   11:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.195   11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:12:01.195   11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.195   11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.195  [2024-12-16 11:33:27.235646] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:12:01.195  [2024-12-16 11:33:27.235690] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:12:01.195   11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.195   11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:12:01.195   11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.195   11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.195  [2024-12-16 11:33:27.247650] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:01.195  [2024-12-16 11:33:27.247693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:01.195  [2024-12-16 11:33:27.247701] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:01.195  [2024-12-16 11:33:27.247711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:01.195  [2024-12-16 11:33:27.247718] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:01.195  [2024-12-16 11:33:27.247727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:01.195  [2024-12-16 11:33:27.247733] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:12:01.195  [2024-12-16 11:33:27.247741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:12:01.195   11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.195   11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:12:01.195   11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.195   11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.454  [2024-12-16 11:33:27.268928] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:01.454  BaseBdev1
00:12:01.454   11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.454   11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:12:01.454   11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:12:01.454   11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:12:01.454   11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:12:01.454   11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:12:01.454   11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:12:01.454   11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:12:01.454   11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.454   11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.454   11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.454   11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:12:01.454   11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.454   11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.454  [
00:12:01.454  {
00:12:01.454  "name": "BaseBdev1",
00:12:01.454  "aliases": [
00:12:01.454  "058e009a-73ce-4de4-9f11-63600aceb3dd"
00:12:01.454  ],
00:12:01.454  "product_name": "Malloc disk",
00:12:01.454  "block_size": 512,
00:12:01.454  "num_blocks": 65536,
00:12:01.454  "uuid": "058e009a-73ce-4de4-9f11-63600aceb3dd",
00:12:01.454  "assigned_rate_limits": {
00:12:01.454  "rw_ios_per_sec": 0,
00:12:01.454  "rw_mbytes_per_sec": 0,
00:12:01.454  "r_mbytes_per_sec": 0,
00:12:01.454  "w_mbytes_per_sec": 0
00:12:01.454  },
00:12:01.454  "claimed": true,
00:12:01.454  "claim_type": "exclusive_write",
00:12:01.454  "zoned": false,
00:12:01.454  "supported_io_types": {
00:12:01.454  "read": true,
00:12:01.454  "write": true,
00:12:01.454  "unmap": true,
00:12:01.454  "flush": true,
00:12:01.454  "reset": true,
00:12:01.454  "nvme_admin": false,
00:12:01.454  "nvme_io": false,
00:12:01.454  "nvme_io_md": false,
00:12:01.454  "write_zeroes": true,
00:12:01.454  "zcopy": true,
00:12:01.454  "get_zone_info": false,
00:12:01.454  "zone_management": false,
00:12:01.454  "zone_append": false,
00:12:01.454  "compare": false,
00:12:01.454  "compare_and_write": false,
00:12:01.454  "abort": true,
00:12:01.454  "seek_hole": false,
00:12:01.454  "seek_data": false,
00:12:01.454  "copy": true,
00:12:01.454  "nvme_iov_md": false
00:12:01.454  },
00:12:01.454  "memory_domains": [
00:12:01.454  {
00:12:01.454  "dma_device_id": "system",
00:12:01.454  "dma_device_type": 1
00:12:01.454  },
00:12:01.454  {
00:12:01.454  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:01.454  "dma_device_type": 2
00:12:01.454  }
00:12:01.454  ],
00:12:01.454  "driver_specific": {}
00:12:01.454  }
00:12:01.454  ]
00:12:01.454   11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.454   11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
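(annotation) waitforbdev above boils down to two RPCs: drain the examine callbacks, then query for the named bdev with the -t 2000 wait the helper passes. Standalone equivalent of what it just did for BaseBdev1, same assumed rpc.py client:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed path
    # wait for bdev examine to finish, then confirm BaseBdev1 is registered
    "$rpc" bdev_wait_for_examine
    "$rpc" bdev_get_bdevs -b BaseBdev1 -t 2000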
00:12:01.454   11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:01.454   11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:01.454   11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:01.454   11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:01.454   11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:01.454   11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:01.454   11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:01.454   11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:01.454   11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:01.454   11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:01.454    11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:01.454    11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:01.454    11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.454    11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.454    11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.454   11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:01.454    "name": "Existed_Raid",
00:12:01.454    "uuid": "00000000-0000-0000-0000-000000000000",
00:12:01.454    "strip_size_kb": 0,
00:12:01.454    "state": "configuring",
00:12:01.454    "raid_level": "raid1",
00:12:01.454    "superblock": false,
00:12:01.454    "num_base_bdevs": 4,
00:12:01.454    "num_base_bdevs_discovered": 1,
00:12:01.454    "num_base_bdevs_operational": 4,
00:12:01.454    "base_bdevs_list": [
00:12:01.454      {
00:12:01.454        "name": "BaseBdev1",
00:12:01.454        "uuid": "058e009a-73ce-4de4-9f11-63600aceb3dd",
00:12:01.454        "is_configured": true,
00:12:01.454        "data_offset": 0,
00:12:01.454        "data_size": 65536
00:12:01.454      },
00:12:01.454      {
00:12:01.454        "name": "BaseBdev2",
00:12:01.454        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:01.454        "is_configured": false,
00:12:01.454        "data_offset": 0,
00:12:01.454        "data_size": 0
00:12:01.454      },
00:12:01.454      {
00:12:01.454        "name": "BaseBdev3",
00:12:01.454        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:01.454        "is_configured": false,
00:12:01.454        "data_offset": 0,
00:12:01.454        "data_size": 0
00:12:01.454      },
00:12:01.454      {
00:12:01.454        "name": "BaseBdev4",
00:12:01.454        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:01.454        "is_configured": false,
00:12:01.454        "data_offset": 0,
00:12:01.454        "data_size": 0
00:12:01.454      }
00:12:01.454    ]
00:12:01.454  }'
00:12:01.454   11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:01.454   11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.713   11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:12:01.713   11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.713   11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.713  [2024-12-16 11:33:27.744225] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:12:01.713  [2024-12-16 11:33:27.744301] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:12:01.713   11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.713   11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:12:01.713   11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.713   11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.714  [2024-12-16 11:33:27.752258] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:01.714  [2024-12-16 11:33:27.754494] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:01.714  [2024-12-16 11:33:27.754558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:01.714  [2024-12-16 11:33:27.754570] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:01.714  [2024-12-16 11:33:27.754580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:01.714  [2024-12-16 11:33:27.754604] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:12:01.714  [2024-12-16 11:33:27.754614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:12:01.714   11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.714   11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:12:01.714   11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:01.714   11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:01.714   11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:01.714   11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:01.714   11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:01.714   11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:01.714   11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:01.714   11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:01.714   11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:01.714   11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:01.714   11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:01.714    11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:01.714    11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:01.714    11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.714    11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.972    11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.972   11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:01.972    "name": "Existed_Raid",
00:12:01.972    "uuid": "00000000-0000-0000-0000-000000000000",
00:12:01.972    "strip_size_kb": 0,
00:12:01.972    "state": "configuring",
00:12:01.972    "raid_level": "raid1",
00:12:01.972    "superblock": false,
00:12:01.972    "num_base_bdevs": 4,
00:12:01.972    "num_base_bdevs_discovered": 1,
00:12:01.972    "num_base_bdevs_operational": 4,
00:12:01.972    "base_bdevs_list": [
00:12:01.972      {
00:12:01.972        "name": "BaseBdev1",
00:12:01.972        "uuid": "058e009a-73ce-4de4-9f11-63600aceb3dd",
00:12:01.972        "is_configured": true,
00:12:01.972        "data_offset": 0,
00:12:01.972        "data_size": 65536
00:12:01.972      },
00:12:01.972      {
00:12:01.972        "name": "BaseBdev2",
00:12:01.972        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:01.972        "is_configured": false,
00:12:01.972        "data_offset": 0,
00:12:01.972        "data_size": 0
00:12:01.972      },
00:12:01.972      {
00:12:01.972        "name": "BaseBdev3",
00:12:01.972        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:01.972        "is_configured": false,
00:12:01.972        "data_offset": 0,
00:12:01.972        "data_size": 0
00:12:01.972      },
00:12:01.972      {
00:12:01.972        "name": "BaseBdev4",
00:12:01.972        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:01.972        "is_configured": false,
00:12:01.972        "data_offset": 0,
00:12:01.972        "data_size": 0
00:12:01.972      }
00:12:01.972    ]
00:12:01.972  }'
00:12:01.972   11:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:01.972   11:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
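The check that just ran is the core of verify_raid_bdev_state: fetch every raid bdev from the target, pick out Existed_Raid with jq, and compare its fields against the expected values. A minimal standalone sketch of the same query follows, assuming SPDK's scripts/rpc.py is on PATH and pointed at the same RPC socket (the test itself goes through the rpc_cmd wrapper):

  # Hypothetical manual re-check of the state shown above; the rpc.py path and socket are assumptions.
  scripts/rpc.py bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | .state'                       # "configuring"
  scripts/rpc.py bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | .num_base_bdevs_discovered'   # 1 at this point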
00:12:02.231   11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:12:02.231   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:02.231   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:02.231  [2024-12-16 11:33:28.172115] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:02.231  BaseBdev2
00:12:02.231   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:02.231   11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:12:02.231   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:12:02.231   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:12:02.231   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:12:02.231   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:12:02.231   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:12:02.231   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:12:02.231   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:02.231   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:02.231   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:02.231   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:12:02.231   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:02.231   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:02.231  [
00:12:02.231  {
00:12:02.231  "name": "BaseBdev2",
00:12:02.231  "aliases": [
00:12:02.231  "99438478-0458-4beb-a58d-0c83d749e781"
00:12:02.231  ],
00:12:02.231  "product_name": "Malloc disk",
00:12:02.231  "block_size": 512,
00:12:02.231  "num_blocks": 65536,
00:12:02.231  "uuid": "99438478-0458-4beb-a58d-0c83d749e781",
00:12:02.231  "assigned_rate_limits": {
00:12:02.231  "rw_ios_per_sec": 0,
00:12:02.231  "rw_mbytes_per_sec": 0,
00:12:02.231  "r_mbytes_per_sec": 0,
00:12:02.231  "w_mbytes_per_sec": 0
00:12:02.231  },
00:12:02.231  "claimed": true,
00:12:02.231  "claim_type": "exclusive_write",
00:12:02.231  "zoned": false,
00:12:02.231  "supported_io_types": {
00:12:02.231  "read": true,
00:12:02.231  "write": true,
00:12:02.231  "unmap": true,
00:12:02.231  "flush": true,
00:12:02.231  "reset": true,
00:12:02.231  "nvme_admin": false,
00:12:02.231  "nvme_io": false,
00:12:02.231  "nvme_io_md": false,
00:12:02.231  "write_zeroes": true,
00:12:02.231  "zcopy": true,
00:12:02.231  "get_zone_info": false,
00:12:02.231  "zone_management": false,
00:12:02.231  "zone_append": false,
00:12:02.231  "compare": false,
00:12:02.231  "compare_and_write": false,
00:12:02.231  "abort": true,
00:12:02.231  "seek_hole": false,
00:12:02.232  "seek_data": false,
00:12:02.232  "copy": true,
00:12:02.232  "nvme_iov_md": false
00:12:02.232  },
00:12:02.232  "memory_domains": [
00:12:02.232  {
00:12:02.232  "dma_device_id": "system",
00:12:02.232  "dma_device_type": 1
00:12:02.232  },
00:12:02.232  {
00:12:02.232  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:02.232  "dma_device_type": 2
00:12:02.232  }
00:12:02.232  ],
00:12:02.232  "driver_specific": {}
00:12:02.232  }
00:12:02.232  ]
00:12:02.232   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:02.232   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:12:02.232   11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:12:02.232   11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:02.232   11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:02.232   11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:02.232   11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:02.232   11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:02.232   11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:02.232   11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:02.232   11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:02.232   11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:02.232   11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:02.232   11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:02.232    11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:02.232    11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:02.232    11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:02.232    11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:02.232    11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:02.232   11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:02.232    "name": "Existed_Raid",
00:12:02.232    "uuid": "00000000-0000-0000-0000-000000000000",
00:12:02.232    "strip_size_kb": 0,
00:12:02.232    "state": "configuring",
00:12:02.232    "raid_level": "raid1",
00:12:02.232    "superblock": false,
00:12:02.232    "num_base_bdevs": 4,
00:12:02.232    "num_base_bdevs_discovered": 2,
00:12:02.232    "num_base_bdevs_operational": 4,
00:12:02.232    "base_bdevs_list": [
00:12:02.232      {
00:12:02.232        "name": "BaseBdev1",
00:12:02.232        "uuid": "058e009a-73ce-4de4-9f11-63600aceb3dd",
00:12:02.232        "is_configured": true,
00:12:02.232        "data_offset": 0,
00:12:02.232        "data_size": 65536
00:12:02.232      },
00:12:02.232      {
00:12:02.232        "name": "BaseBdev2",
00:12:02.232        "uuid": "99438478-0458-4beb-a58d-0c83d749e781",
00:12:02.232        "is_configured": true,
00:12:02.232        "data_offset": 0,
00:12:02.232        "data_size": 65536
00:12:02.232      },
00:12:02.232      {
00:12:02.232        "name": "BaseBdev3",
00:12:02.232        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:02.232        "is_configured": false,
00:12:02.232        "data_offset": 0,
00:12:02.232        "data_size": 0
00:12:02.232      },
00:12:02.232      {
00:12:02.232        "name": "BaseBdev4",
00:12:02.232        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:02.232        "is_configured": false,
00:12:02.232        "data_offset": 0,
00:12:02.232        "data_size": 0
00:12:02.232      }
00:12:02.232    ]
00:12:02.232  }'
00:12:02.232   11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:02.232   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:02.803  [2024-12-16 11:33:28.662883] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:02.803  BaseBdev3
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:02.803  [
00:12:02.803  {
00:12:02.803  "name": "BaseBdev3",
00:12:02.803  "aliases": [
00:12:02.803  "138d35c5-3541-4b45-b657-6b6e8bf9afea"
00:12:02.803  ],
00:12:02.803  "product_name": "Malloc disk",
00:12:02.803  "block_size": 512,
00:12:02.803  "num_blocks": 65536,
00:12:02.803  "uuid": "138d35c5-3541-4b45-b657-6b6e8bf9afea",
00:12:02.803  "assigned_rate_limits": {
00:12:02.803  "rw_ios_per_sec": 0,
00:12:02.803  "rw_mbytes_per_sec": 0,
00:12:02.803  "r_mbytes_per_sec": 0,
00:12:02.803  "w_mbytes_per_sec": 0
00:12:02.803  },
00:12:02.803  "claimed": true,
00:12:02.803  "claim_type": "exclusive_write",
00:12:02.803  "zoned": false,
00:12:02.803  "supported_io_types": {
00:12:02.803  "read": true,
00:12:02.803  "write": true,
00:12:02.803  "unmap": true,
00:12:02.803  "flush": true,
00:12:02.803  "reset": true,
00:12:02.803  "nvme_admin": false,
00:12:02.803  "nvme_io": false,
00:12:02.803  "nvme_io_md": false,
00:12:02.803  "write_zeroes": true,
00:12:02.803  "zcopy": true,
00:12:02.803  "get_zone_info": false,
00:12:02.803  "zone_management": false,
00:12:02.803  "zone_append": false,
00:12:02.803  "compare": false,
00:12:02.803  "compare_and_write": false,
00:12:02.803  "abort": true,
00:12:02.803  "seek_hole": false,
00:12:02.803  "seek_data": false,
00:12:02.803  "copy": true,
00:12:02.803  "nvme_iov_md": false
00:12:02.803  },
00:12:02.803  "memory_domains": [
00:12:02.803  {
00:12:02.803  "dma_device_id": "system",
00:12:02.803  "dma_device_type": 1
00:12:02.803  },
00:12:02.803  {
00:12:02.803  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:02.803  "dma_device_type": 2
00:12:02.803  }
00:12:02.803  ],
00:12:02.803  "driver_specific": {}
00:12:02.803  }
00:12:02.803  ]
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:02.803    11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:02.803    11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:02.803    11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:02.803    11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:02.803    11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:02.803    "name": "Existed_Raid",
00:12:02.803    "uuid": "00000000-0000-0000-0000-000000000000",
00:12:02.803    "strip_size_kb": 0,
00:12:02.803    "state": "configuring",
00:12:02.803    "raid_level": "raid1",
00:12:02.803    "superblock": false,
00:12:02.803    "num_base_bdevs": 4,
00:12:02.803    "num_base_bdevs_discovered": 3,
00:12:02.803    "num_base_bdevs_operational": 4,
00:12:02.803    "base_bdevs_list": [
00:12:02.803      {
00:12:02.803        "name": "BaseBdev1",
00:12:02.803        "uuid": "058e009a-73ce-4de4-9f11-63600aceb3dd",
00:12:02.803        "is_configured": true,
00:12:02.803        "data_offset": 0,
00:12:02.803        "data_size": 65536
00:12:02.803      },
00:12:02.803      {
00:12:02.803        "name": "BaseBdev2",
00:12:02.803        "uuid": "99438478-0458-4beb-a58d-0c83d749e781",
00:12:02.803        "is_configured": true,
00:12:02.803        "data_offset": 0,
00:12:02.803        "data_size": 65536
00:12:02.803      },
00:12:02.803      {
00:12:02.803        "name": "BaseBdev3",
00:12:02.803        "uuid": "138d35c5-3541-4b45-b657-6b6e8bf9afea",
00:12:02.803        "is_configured": true,
00:12:02.803        "data_offset": 0,
00:12:02.803        "data_size": 65536
00:12:02.803      },
00:12:02.803      {
00:12:02.803        "name": "BaseBdev4",
00:12:02.803        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:02.803        "is_configured": false,
00:12:02.803        "data_offset": 0,
00:12:02.803        "data_size": 0
00:12:02.803      }
00:12:02.803    ]
00:12:02.803  }'
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:02.803   11:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:03.372   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:12:03.372   11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:03.372   11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:03.372  [2024-12-16 11:33:29.209373] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:12:03.372  [2024-12-16 11:33:29.209434] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:12:03.372  [2024-12-16 11:33:29.209445] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:12:03.372  [2024-12-16 11:33:29.209816] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:12:03.372  [2024-12-16 11:33:29.210001] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:12:03.372  [2024-12-16 11:33:29.210024] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:12:03.372  [2024-12-16 11:33:29.210265] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:03.372  BaseBdev4
00:12:03.372   11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:03.372   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:12:03.372   11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:12:03.372   11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:12:03.372   11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:12:03.372   11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:12:03.372   11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:12:03.372   11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:12:03.372   11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:03.372   11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:03.372   11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:03.372   11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:12:03.372   11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:03.372   11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:03.372  [
00:12:03.372  {
00:12:03.372  "name": "BaseBdev4",
00:12:03.372  "aliases": [
00:12:03.372  "e3ffeceb-cee8-48b8-8585-96eceb2f03d3"
00:12:03.372  ],
00:12:03.372  "product_name": "Malloc disk",
00:12:03.372  "block_size": 512,
00:12:03.372  "num_blocks": 65536,
00:12:03.372  "uuid": "e3ffeceb-cee8-48b8-8585-96eceb2f03d3",
00:12:03.372  "assigned_rate_limits": {
00:12:03.372  "rw_ios_per_sec": 0,
00:12:03.372  "rw_mbytes_per_sec": 0,
00:12:03.372  "r_mbytes_per_sec": 0,
00:12:03.372  "w_mbytes_per_sec": 0
00:12:03.372  },
00:12:03.372  "claimed": true,
00:12:03.372  "claim_type": "exclusive_write",
00:12:03.372  "zoned": false,
00:12:03.372  "supported_io_types": {
00:12:03.372  "read": true,
00:12:03.372  "write": true,
00:12:03.372  "unmap": true,
00:12:03.372  "flush": true,
00:12:03.372  "reset": true,
00:12:03.372  "nvme_admin": false,
00:12:03.372  "nvme_io": false,
00:12:03.372  "nvme_io_md": false,
00:12:03.372  "write_zeroes": true,
00:12:03.372  "zcopy": true,
00:12:03.372  "get_zone_info": false,
00:12:03.372  "zone_management": false,
00:12:03.372  "zone_append": false,
00:12:03.372  "compare": false,
00:12:03.372  "compare_and_write": false,
00:12:03.372  "abort": true,
00:12:03.372  "seek_hole": false,
00:12:03.372  "seek_data": false,
00:12:03.372  "copy": true,
00:12:03.372  "nvme_iov_md": false
00:12:03.372  },
00:12:03.372  "memory_domains": [
00:12:03.372  {
00:12:03.372  "dma_device_id": "system",
00:12:03.372  "dma_device_type": 1
00:12:03.372  },
00:12:03.372  {
00:12:03.372  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:03.372  "dma_device_type": 2
00:12:03.372  }
00:12:03.372  ],
00:12:03.372  "driver_specific": {}
00:12:03.372  }
00:12:03.372  ]
00:12:03.372   11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:03.372   11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:12:03.372   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:12:03.372   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:03.372   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4
00:12:03.372   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:03.372   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:03.372   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:03.372   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:03.372   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:03.372   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:03.372   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:03.372   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:03.372   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:03.372    11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:03.372    11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:03.372    11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:03.372    11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:03.372    11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:03.372   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:03.372    "name": "Existed_Raid",
00:12:03.372    "uuid": "6c717c82-0976-4f4d-bb1c-ce6e5fba0bfc",
00:12:03.372    "strip_size_kb": 0,
00:12:03.372    "state": "online",
00:12:03.372    "raid_level": "raid1",
00:12:03.372    "superblock": false,
00:12:03.372    "num_base_bdevs": 4,
00:12:03.372    "num_base_bdevs_discovered": 4,
00:12:03.372    "num_base_bdevs_operational": 4,
00:12:03.372    "base_bdevs_list": [
00:12:03.372      {
00:12:03.372        "name": "BaseBdev1",
00:12:03.372        "uuid": "058e009a-73ce-4de4-9f11-63600aceb3dd",
00:12:03.372        "is_configured": true,
00:12:03.372        "data_offset": 0,
00:12:03.372        "data_size": 65536
00:12:03.372      },
00:12:03.372      {
00:12:03.372        "name": "BaseBdev2",
00:12:03.372        "uuid": "99438478-0458-4beb-a58d-0c83d749e781",
00:12:03.373        "is_configured": true,
00:12:03.373        "data_offset": 0,
00:12:03.373        "data_size": 65536
00:12:03.373      },
00:12:03.373      {
00:12:03.373        "name": "BaseBdev3",
00:12:03.373        "uuid": "138d35c5-3541-4b45-b657-6b6e8bf9afea",
00:12:03.373        "is_configured": true,
00:12:03.373        "data_offset": 0,
00:12:03.373        "data_size": 65536
00:12:03.373      },
00:12:03.373      {
00:12:03.373        "name": "BaseBdev4",
00:12:03.373        "uuid": "e3ffeceb-cee8-48b8-8585-96eceb2f03d3",
00:12:03.373        "is_configured": true,
00:12:03.373        "data_offset": 0,
00:12:03.373        "data_size": 65536
00:12:03.373      }
00:12:03.373    ]
00:12:03.373  }'
00:12:03.373   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:03.373   11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
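At this point the bdev_raid.sh@250 loop has added the three missing base bdevs, and the array flipped from configuring to online once the fourth member was claimed. Condensed, the per-iteration sequence the trace shows is roughly the following sketch (scripts/rpc.py and the default socket are assumptions; the test uses rpc_cmd):

  # Each pass creates a 32 MiB malloc disk with 512-byte blocks and waits for it to register.
  for b in BaseBdev2 BaseBdev3 BaseBdev4; do
    scripts/rpc.py bdev_malloc_create 32 512 -b "$b"
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py bdev_get_bdevs -b "$b" -t 2000 > /dev/null   # waitforbdev equivalent
  done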
00:12:03.942   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:12:03.942   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:12:03.942   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:12:03.942   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:12:03.942   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:12:03.942   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:12:03.942    11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:12:03.942    11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:12:03.942    11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:03.942    11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:03.942  [2024-12-16 11:33:29.740970] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:03.942    11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:03.942   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:12:03.942    "name": "Existed_Raid",
00:12:03.942    "aliases": [
00:12:03.942      "6c717c82-0976-4f4d-bb1c-ce6e5fba0bfc"
00:12:03.942    ],
00:12:03.942    "product_name": "Raid Volume",
00:12:03.942    "block_size": 512,
00:12:03.942    "num_blocks": 65536,
00:12:03.942    "uuid": "6c717c82-0976-4f4d-bb1c-ce6e5fba0bfc",
00:12:03.942    "assigned_rate_limits": {
00:12:03.942      "rw_ios_per_sec": 0,
00:12:03.942      "rw_mbytes_per_sec": 0,
00:12:03.942      "r_mbytes_per_sec": 0,
00:12:03.942      "w_mbytes_per_sec": 0
00:12:03.942    },
00:12:03.942    "claimed": false,
00:12:03.942    "zoned": false,
00:12:03.942    "supported_io_types": {
00:12:03.942      "read": true,
00:12:03.942      "write": true,
00:12:03.942      "unmap": false,
00:12:03.942      "flush": false,
00:12:03.942      "reset": true,
00:12:03.942      "nvme_admin": false,
00:12:03.942      "nvme_io": false,
00:12:03.942      "nvme_io_md": false,
00:12:03.942      "write_zeroes": true,
00:12:03.942      "zcopy": false,
00:12:03.942      "get_zone_info": false,
00:12:03.942      "zone_management": false,
00:12:03.942      "zone_append": false,
00:12:03.942      "compare": false,
00:12:03.942      "compare_and_write": false,
00:12:03.942      "abort": false,
00:12:03.942      "seek_hole": false,
00:12:03.942      "seek_data": false,
00:12:03.942      "copy": false,
00:12:03.942      "nvme_iov_md": false
00:12:03.942    },
00:12:03.942    "memory_domains": [
00:12:03.942      {
00:12:03.942        "dma_device_id": "system",
00:12:03.942        "dma_device_type": 1
00:12:03.942      },
00:12:03.942      {
00:12:03.942        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:03.942        "dma_device_type": 2
00:12:03.942      },
00:12:03.942      {
00:12:03.942        "dma_device_id": "system",
00:12:03.942        "dma_device_type": 1
00:12:03.942      },
00:12:03.942      {
00:12:03.942        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:03.942        "dma_device_type": 2
00:12:03.942      },
00:12:03.942      {
00:12:03.942        "dma_device_id": "system",
00:12:03.942        "dma_device_type": 1
00:12:03.942      },
00:12:03.942      {
00:12:03.942        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:03.942        "dma_device_type": 2
00:12:03.942      },
00:12:03.942      {
00:12:03.942        "dma_device_id": "system",
00:12:03.942        "dma_device_type": 1
00:12:03.942      },
00:12:03.942      {
00:12:03.942        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:03.942        "dma_device_type": 2
00:12:03.942      }
00:12:03.942    ],
00:12:03.942    "driver_specific": {
00:12:03.942      "raid": {
00:12:03.942        "uuid": "6c717c82-0976-4f4d-bb1c-ce6e5fba0bfc",
00:12:03.942        "strip_size_kb": 0,
00:12:03.942        "state": "online",
00:12:03.942        "raid_level": "raid1",
00:12:03.942        "superblock": false,
00:12:03.942        "num_base_bdevs": 4,
00:12:03.942        "num_base_bdevs_discovered": 4,
00:12:03.942        "num_base_bdevs_operational": 4,
00:12:03.942        "base_bdevs_list": [
00:12:03.942          {
00:12:03.942            "name": "BaseBdev1",
00:12:03.942            "uuid": "058e009a-73ce-4de4-9f11-63600aceb3dd",
00:12:03.942            "is_configured": true,
00:12:03.942            "data_offset": 0,
00:12:03.942            "data_size": 65536
00:12:03.942          },
00:12:03.942          {
00:12:03.942            "name": "BaseBdev2",
00:12:03.942            "uuid": "99438478-0458-4beb-a58d-0c83d749e781",
00:12:03.942            "is_configured": true,
00:12:03.942            "data_offset": 0,
00:12:03.942            "data_size": 65536
00:12:03.942          },
00:12:03.942          {
00:12:03.942            "name": "BaseBdev3",
00:12:03.942            "uuid": "138d35c5-3541-4b45-b657-6b6e8bf9afea",
00:12:03.942            "is_configured": true,
00:12:03.942            "data_offset": 0,
00:12:03.942            "data_size": 65536
00:12:03.942          },
00:12:03.942          {
00:12:03.942            "name": "BaseBdev4",
00:12:03.942            "uuid": "e3ffeceb-cee8-48b8-8585-96eceb2f03d3",
00:12:03.942            "is_configured": true,
00:12:03.942            "data_offset": 0,
00:12:03.942            "data_size": 65536
00:12:03.942          }
00:12:03.942        ]
00:12:03.942      }
00:12:03.942    }
00:12:03.942  }'
00:12:03.942    11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:12:03.942   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:12:03.942  BaseBdev2
00:12:03.942  BaseBdev3
00:12:03.942  BaseBdev4'
00:12:03.942    11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:03.942   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:12:03.942   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:03.942    11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:03.942    11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:12:03.942    11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:03.942    11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:03.942    11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:03.942   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:12:03.942   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:12:03.942   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:03.942    11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:12:03.942    11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:03.942    11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:03.942    11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:03.942    11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:03.942   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:12:03.942   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:12:03.942   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:03.942    11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:12:03.942    11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:03.942    11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:03.942    11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:03.942    11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:03.942   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:12:03.942   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:12:03.942   11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:03.942    11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:12:03.942    11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:03.942    11:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:03.942    11:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:04.201    11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.201   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:12:04.201   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
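verify_raid_bdev_properties, traced above, confirms the raid volume reports the same metadata format as each configured base bdev by joining block_size, md_size, md_interleave and dif_type into one string and comparing. Roughly the same comparison done by hand (a sketch; scripts/rpc.py is an assumption, the jq filter is lifted from the trace):

  raid=$(scripts/rpc.py bdev_get_bdevs -b Existed_Raid \
         | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
  for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    base=$(scripts/rpc.py bdev_get_bdevs -b "$b" \
           | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
    [[ "$raid" == "$base" ]] || echo "metadata format mismatch on $b"
  done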
00:12:04.201   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:12:04.201   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.201   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:04.201  [2024-12-16 11:33:30.036096] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:12:04.201   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.201   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:12:04.201   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:12:04.201   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:12:04.201   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0
00:12:04.201   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:12:04.201   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:12:04.201   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:04.201   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:04.201   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:04.201   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:04.201   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:04.201   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:04.201   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:04.201   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:04.201   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:04.201    11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:04.201    11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.201    11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:04.201    11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:04.201    11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.201   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:04.201    "name": "Existed_Raid",
00:12:04.201    "uuid": "6c717c82-0976-4f4d-bb1c-ce6e5fba0bfc",
00:12:04.201    "strip_size_kb": 0,
00:12:04.201    "state": "online",
00:12:04.201    "raid_level": "raid1",
00:12:04.201    "superblock": false,
00:12:04.201    "num_base_bdevs": 4,
00:12:04.201    "num_base_bdevs_discovered": 3,
00:12:04.201    "num_base_bdevs_operational": 3,
00:12:04.201    "base_bdevs_list": [
00:12:04.201      {
00:12:04.201        "name": null,
00:12:04.201        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:04.201        "is_configured": false,
00:12:04.201        "data_offset": 0,
00:12:04.201        "data_size": 65536
00:12:04.201      },
00:12:04.201      {
00:12:04.201        "name": "BaseBdev2",
00:12:04.201        "uuid": "99438478-0458-4beb-a58d-0c83d749e781",
00:12:04.201        "is_configured": true,
00:12:04.201        "data_offset": 0,
00:12:04.201        "data_size": 65536
00:12:04.201      },
00:12:04.201      {
00:12:04.201        "name": "BaseBdev3",
00:12:04.201        "uuid": "138d35c5-3541-4b45-b657-6b6e8bf9afea",
00:12:04.201        "is_configured": true,
00:12:04.201        "data_offset": 0,
00:12:04.201        "data_size": 65536
00:12:04.201      },
00:12:04.201      {
00:12:04.201        "name": "BaseBdev4",
00:12:04.201        "uuid": "e3ffeceb-cee8-48b8-8585-96eceb2f03d3",
00:12:04.201        "is_configured": true,
00:12:04.201        "data_offset": 0,
00:12:04.201        "data_size": 65536
00:12:04.201      }
00:12:04.201    ]
00:12:04.201  }'
00:12:04.201   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:04.201   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
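Because raid1 carries redundancy, deleting BaseBdev1 above leaves Existed_Raid online with three operational members and the removed slot reported as an unconfigured null entry. Condensed, the step the trace just performed looks like this (same scripts/rpc.py assumption as the earlier sketches):

  scripts/rpc.py bdev_malloc_delete BaseBdev1
  scripts/rpc.py bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid")
             | "\(.state) \(.num_base_bdevs_operational)"'    # expect: online 3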
00:12:04.460   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:12:04.460   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:12:04.460    11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:04.460    11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.460    11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:04.460    11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:12:04.460    11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:04.719  [2024-12-16 11:33:30.538994] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:12:04.719    11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:12:04.719    11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:04.719    11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.719    11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:04.719    11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:04.719  [2024-12-16 11:33:30.610838] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:12:04.719    11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:04.719    11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.719    11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:12:04.719    11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:04.719    11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:04.719  [2024-12-16 11:33:30.682434] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:12:04.719  [2024-12-16 11:33:30.682531] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:04.719  [2024-12-16 11:33:30.694657] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:04.719  [2024-12-16 11:33:30.694721] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:04.719  [2024-12-16 11:33:30.694734] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:12:04.719    11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:04.719    11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.719    11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:04.719    11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:12:04.719    11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:04.719  BaseBdev2
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.719   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:04.719  [
00:12:04.719  {
00:12:04.719  "name": "BaseBdev2",
00:12:04.978  "aliases": [
00:12:04.978  "e59df4ff-c6a0-4565-b17d-b64dbc2c90c1"
00:12:04.978  ],
00:12:04.978  "product_name": "Malloc disk",
00:12:04.978  "block_size": 512,
00:12:04.978  "num_blocks": 65536,
00:12:04.978  "uuid": "e59df4ff-c6a0-4565-b17d-b64dbc2c90c1",
00:12:04.978  "assigned_rate_limits": {
00:12:04.978  "rw_ios_per_sec": 0,
00:12:04.978  "rw_mbytes_per_sec": 0,
00:12:04.978  "r_mbytes_per_sec": 0,
00:12:04.978  "w_mbytes_per_sec": 0
00:12:04.978  },
00:12:04.978  "claimed": false,
00:12:04.978  "zoned": false,
00:12:04.978  "supported_io_types": {
00:12:04.978  "read": true,
00:12:04.978  "write": true,
00:12:04.978  "unmap": true,
00:12:04.978  "flush": true,
00:12:04.978  "reset": true,
00:12:04.978  "nvme_admin": false,
00:12:04.978  "nvme_io": false,
00:12:04.978  "nvme_io_md": false,
00:12:04.978  "write_zeroes": true,
00:12:04.978  "zcopy": true,
00:12:04.978  "get_zone_info": false,
00:12:04.978  "zone_management": false,
00:12:04.978  "zone_append": false,
00:12:04.978  "compare": false,
00:12:04.978  "compare_and_write": false,
00:12:04.978  "abort": true,
00:12:04.978  "seek_hole": false,
00:12:04.978  "seek_data": false,
00:12:04.978  "copy": true,
00:12:04.978  "nvme_iov_md": false
00:12:04.978  },
00:12:04.978  "memory_domains": [
00:12:04.978  {
00:12:04.978  "dma_device_id": "system",
00:12:04.978  "dma_device_type": 1
00:12:04.978  },
00:12:04.978  {
00:12:04.978  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:04.978  "dma_device_type": 2
00:12:04.978  }
00:12:04.978  ],
00:12:04.978  "driver_specific": {}
00:12:04.978  }
00:12:04.978  ]
00:12:04.978   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.978   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:12:04.978   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:12:04.978   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:12:04.978   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:12:04.978   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.978   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:04.978  BaseBdev3
00:12:04.978   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.978   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:12:04.978   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:12:04.978   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:12:04.978   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:12:04.978   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:12:04.978   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:12:04.978   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:12:04.978   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.978   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:04.978   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.978   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:12:04.978   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.978   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:04.978  [
00:12:04.978  {
00:12:04.978  "name": "BaseBdev3",
00:12:04.979  "aliases": [
00:12:04.979  "51d5ddbc-5390-43cf-a376-61a1c77f4b4b"
00:12:04.979  ],
00:12:04.979  "product_name": "Malloc disk",
00:12:04.979  "block_size": 512,
00:12:04.979  "num_blocks": 65536,
00:12:04.979  "uuid": "51d5ddbc-5390-43cf-a376-61a1c77f4b4b",
00:12:04.979  "assigned_rate_limits": {
00:12:04.979  "rw_ios_per_sec": 0,
00:12:04.979  "rw_mbytes_per_sec": 0,
00:12:04.979  "r_mbytes_per_sec": 0,
00:12:04.979  "w_mbytes_per_sec": 0
00:12:04.979  },
00:12:04.979  "claimed": false,
00:12:04.979  "zoned": false,
00:12:04.979  "supported_io_types": {
00:12:04.979  "read": true,
00:12:04.979  "write": true,
00:12:04.979  "unmap": true,
00:12:04.979  "flush": true,
00:12:04.979  "reset": true,
00:12:04.979  "nvme_admin": false,
00:12:04.979  "nvme_io": false,
00:12:04.979  "nvme_io_md": false,
00:12:04.979  "write_zeroes": true,
00:12:04.979  "zcopy": true,
00:12:04.979  "get_zone_info": false,
00:12:04.979  "zone_management": false,
00:12:04.979  "zone_append": false,
00:12:04.979  "compare": false,
00:12:04.979  "compare_and_write": false,
00:12:04.979  "abort": true,
00:12:04.979  "seek_hole": false,
00:12:04.979  "seek_data": false,
00:12:04.979  "copy": true,
00:12:04.979  "nvme_iov_md": false
00:12:04.979  },
00:12:04.979  "memory_domains": [
00:12:04.979  {
00:12:04.979  "dma_device_id": "system",
00:12:04.979  "dma_device_type": 1
00:12:04.979  },
00:12:04.979  {
00:12:04.979  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:04.979  "dma_device_type": 2
00:12:04.979  }
00:12:04.979  ],
00:12:04.979  "driver_specific": {}
00:12:04.979  }
00:12:04.979  ]
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:04.979  BaseBdev4
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:04.979  [
00:12:04.979  {
00:12:04.979  "name": "BaseBdev4",
00:12:04.979  "aliases": [
00:12:04.979  "1fb57dc4-aaca-4a73-9d59-773fe3d87b46"
00:12:04.979  ],
00:12:04.979  "product_name": "Malloc disk",
00:12:04.979  "block_size": 512,
00:12:04.979  "num_blocks": 65536,
00:12:04.979  "uuid": "1fb57dc4-aaca-4a73-9d59-773fe3d87b46",
00:12:04.979  "assigned_rate_limits": {
00:12:04.979  "rw_ios_per_sec": 0,
00:12:04.979  "rw_mbytes_per_sec": 0,
00:12:04.979  "r_mbytes_per_sec": 0,
00:12:04.979  "w_mbytes_per_sec": 0
00:12:04.979  },
00:12:04.979  "claimed": false,
00:12:04.979  "zoned": false,
00:12:04.979  "supported_io_types": {
00:12:04.979  "read": true,
00:12:04.979  "write": true,
00:12:04.979  "unmap": true,
00:12:04.979  "flush": true,
00:12:04.979  "reset": true,
00:12:04.979  "nvme_admin": false,
00:12:04.979  "nvme_io": false,
00:12:04.979  "nvme_io_md": false,
00:12:04.979  "write_zeroes": true,
00:12:04.979  "zcopy": true,
00:12:04.979  "get_zone_info": false,
00:12:04.979  "zone_management": false,
00:12:04.979  "zone_append": false,
00:12:04.979  "compare": false,
00:12:04.979  "compare_and_write": false,
00:12:04.979  "abort": true,
00:12:04.979  "seek_hole": false,
00:12:04.979  "seek_data": false,
00:12:04.979  "copy": true,
00:12:04.979  "nvme_iov_md": false
00:12:04.979  },
00:12:04.979  "memory_domains": [
00:12:04.979  {
00:12:04.979  "dma_device_id": "system",
00:12:04.979  "dma_device_type": 1
00:12:04.979  },
00:12:04.979  {
00:12:04.979  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:04.979  "dma_device_type": 2
00:12:04.979  }
00:12:04.979  ],
00:12:04.979  "driver_specific": {}
00:12:04.979  }
00:12:04.979  ]
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:04.979  [2024-12-16 11:33:30.913661] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:04.979  [2024-12-16 11:33:30.913763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:04.979  [2024-12-16 11:33:30.913792] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:04.979  [2024-12-16 11:33:30.915932] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:04.979  [2024-12-16 11:33:30.915984] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:04.979    11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:04.979    11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:04.979    11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:04.979    11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:04.979    11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:04.979    "name": "Existed_Raid",
00:12:04.979    "uuid": "00000000-0000-0000-0000-000000000000",
00:12:04.979    "strip_size_kb": 0,
00:12:04.979    "state": "configuring",
00:12:04.979    "raid_level": "raid1",
00:12:04.979    "superblock": false,
00:12:04.979    "num_base_bdevs": 4,
00:12:04.979    "num_base_bdevs_discovered": 3,
00:12:04.979    "num_base_bdevs_operational": 4,
00:12:04.979    "base_bdevs_list": [
00:12:04.979      {
00:12:04.979        "name": "BaseBdev1",
00:12:04.979        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:04.979        "is_configured": false,
00:12:04.979        "data_offset": 0,
00:12:04.979        "data_size": 0
00:12:04.979      },
00:12:04.979      {
00:12:04.979        "name": "BaseBdev2",
00:12:04.979        "uuid": "e59df4ff-c6a0-4565-b17d-b64dbc2c90c1",
00:12:04.979        "is_configured": true,
00:12:04.979        "data_offset": 0,
00:12:04.979        "data_size": 65536
00:12:04.979      },
00:12:04.979      {
00:12:04.979        "name": "BaseBdev3",
00:12:04.979        "uuid": "51d5ddbc-5390-43cf-a376-61a1c77f4b4b",
00:12:04.979        "is_configured": true,
00:12:04.979        "data_offset": 0,
00:12:04.979        "data_size": 65536
00:12:04.979      },
00:12:04.979      {
00:12:04.979        "name": "BaseBdev4",
00:12:04.979        "uuid": "1fb57dc4-aaca-4a73-9d59-773fe3d87b46",
00:12:04.979        "is_configured": true,
00:12:04.979        "data_offset": 0,
00:12:04.979        "data_size": 65536
00:12:04.979      }
00:12:04.979    ]
00:12:04.979  }'
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:04.979   11:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
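The sequence above creates Existed_Raid with bdev_raid_create over four named base bdevs even though BaseBdev1 does not exist yet, and verify_raid_bdev_state then confirms the array stays in "configuring" with 3 of 4 base bdevs discovered. For reference, the same flow can be driven by hand through SPDK's public RPCs; this is only an illustrative sketch (it assumes a running SPDK target and the repository's scripts/rpc.py talking to the default RPC socket, which are not shown in this trace):

    # create the backing malloc bdevs that already exist at this point (32 MiB, 512 B blocks)
    ./scripts/rpc.py bdev_malloc_create 32 512 -b BaseBdev2
    ./scripts/rpc.py bdev_malloc_create 32 512 -b BaseBdev3
    ./scripts/rpc.py bdev_malloc_create 32 512 -b BaseBdev4
    # create a raid1 array that also references the not-yet-present BaseBdev1
    ./scripts/rpc.py bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # the array stays in "configuring" until the last base bdev shows up
    ./scripts/rpc.py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'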
00:12:05.548   11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:12:05.548   11:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:05.548   11:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:05.548  [2024-12-16 11:33:31.376869] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:12:05.548   11:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:05.548   11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:05.548   11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:05.548   11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:05.548   11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:05.548   11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:05.548   11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:05.548   11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:05.548   11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:05.548   11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:05.548   11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:05.548    11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:05.548    11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:05.548    11:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:05.548    11:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:05.548    11:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:05.548   11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:05.548    "name": "Existed_Raid",
00:12:05.548    "uuid": "00000000-0000-0000-0000-000000000000",
00:12:05.548    "strip_size_kb": 0,
00:12:05.548    "state": "configuring",
00:12:05.548    "raid_level": "raid1",
00:12:05.548    "superblock": false,
00:12:05.548    "num_base_bdevs": 4,
00:12:05.548    "num_base_bdevs_discovered": 2,
00:12:05.548    "num_base_bdevs_operational": 4,
00:12:05.548    "base_bdevs_list": [
00:12:05.548      {
00:12:05.548        "name": "BaseBdev1",
00:12:05.548        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:05.548        "is_configured": false,
00:12:05.548        "data_offset": 0,
00:12:05.548        "data_size": 0
00:12:05.548      },
00:12:05.548      {
00:12:05.548        "name": null,
00:12:05.548        "uuid": "e59df4ff-c6a0-4565-b17d-b64dbc2c90c1",
00:12:05.548        "is_configured": false,
00:12:05.548        "data_offset": 0,
00:12:05.548        "data_size": 65536
00:12:05.548      },
00:12:05.548      {
00:12:05.548        "name": "BaseBdev3",
00:12:05.548        "uuid": "51d5ddbc-5390-43cf-a376-61a1c77f4b4b",
00:12:05.548        "is_configured": true,
00:12:05.548        "data_offset": 0,
00:12:05.548        "data_size": 65536
00:12:05.548      },
00:12:05.548      {
00:12:05.548        "name": "BaseBdev4",
00:12:05.548        "uuid": "1fb57dc4-aaca-4a73-9d59-773fe3d87b46",
00:12:05.548        "is_configured": true,
00:12:05.548        "data_offset": 0,
00:12:05.548        "data_size": 65536
00:12:05.548      }
00:12:05.548    ]
00:12:05.548  }'
00:12:05.548   11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:05.548   11:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:05.808    11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:05.808    11:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:05.808    11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:12:05.808    11:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:05.808    11:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.068   11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
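In the step above, bdev_raid_remove_base_bdev drops BaseBdev2 out of the still-configuring array: the slot keeps its UUID but its name becomes null and is_configured flips to false, which the jq probe on base_bdevs_list[1] verifies. A hedged standalone equivalent, under the same rpc.py/target assumptions as the sketch above:

    ./scripts/rpc.py bdev_raid_remove_base_bdev BaseBdev2
    # slot 1 should now be an unconfigured placeholder that still remembers the old UUID
    ./scripts/rpc.py bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[1] | {name, uuid, is_configured}'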
00:12:06.068   11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:12:06.068   11:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.068   11:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.068  [2024-12-16 11:33:31.895308] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:06.068  BaseBdev1
00:12:06.068   11:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.068   11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:12:06.068   11:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:12:06.068   11:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:12:06.068   11:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:12:06.068   11:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:12:06.068   11:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:12:06.068   11:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:12:06.068   11:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.068   11:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.068   11:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.068   11:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:12:06.068   11:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.068   11:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.068  [
00:12:06.068  {
00:12:06.068  "name": "BaseBdev1",
00:12:06.068  "aliases": [
00:12:06.068  "6486186a-73a5-4e74-a3f0-281638ed78f7"
00:12:06.068  ],
00:12:06.068  "product_name": "Malloc disk",
00:12:06.068  "block_size": 512,
00:12:06.068  "num_blocks": 65536,
00:12:06.068  "uuid": "6486186a-73a5-4e74-a3f0-281638ed78f7",
00:12:06.068  "assigned_rate_limits": {
00:12:06.068  "rw_ios_per_sec": 0,
00:12:06.068  "rw_mbytes_per_sec": 0,
00:12:06.068  "r_mbytes_per_sec": 0,
00:12:06.068  "w_mbytes_per_sec": 0
00:12:06.068  },
00:12:06.068  "claimed": true,
00:12:06.068  "claim_type": "exclusive_write",
00:12:06.068  "zoned": false,
00:12:06.068  "supported_io_types": {
00:12:06.068  "read": true,
00:12:06.068  "write": true,
00:12:06.068  "unmap": true,
00:12:06.068  "flush": true,
00:12:06.068  "reset": true,
00:12:06.068  "nvme_admin": false,
00:12:06.068  "nvme_io": false,
00:12:06.068  "nvme_io_md": false,
00:12:06.068  "write_zeroes": true,
00:12:06.068  "zcopy": true,
00:12:06.068  "get_zone_info": false,
00:12:06.068  "zone_management": false,
00:12:06.068  "zone_append": false,
00:12:06.068  "compare": false,
00:12:06.068  "compare_and_write": false,
00:12:06.068  "abort": true,
00:12:06.068  "seek_hole": false,
00:12:06.068  "seek_data": false,
00:12:06.068  "copy": true,
00:12:06.068  "nvme_iov_md": false
00:12:06.068  },
00:12:06.068  "memory_domains": [
00:12:06.068  {
00:12:06.068  "dma_device_id": "system",
00:12:06.068  "dma_device_type": 1
00:12:06.068  },
00:12:06.068  {
00:12:06.068  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:06.068  "dma_device_type": 2
00:12:06.068  }
00:12:06.068  ],
00:12:06.068  "driver_specific": {}
00:12:06.068  }
00:12:06.068  ]
00:12:06.068   11:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.068   11:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:12:06.068   11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:06.068   11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:06.068   11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:06.068   11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:06.068   11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:06.068   11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:06.068   11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:06.068   11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:06.068   11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:06.068   11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:06.068    11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:06.068    11:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.068    11:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.069    11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:06.069    11:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.069   11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:06.069    "name": "Existed_Raid",
00:12:06.069    "uuid": "00000000-0000-0000-0000-000000000000",
00:12:06.069    "strip_size_kb": 0,
00:12:06.069    "state": "configuring",
00:12:06.069    "raid_level": "raid1",
00:12:06.069    "superblock": false,
00:12:06.069    "num_base_bdevs": 4,
00:12:06.069    "num_base_bdevs_discovered": 3,
00:12:06.069    "num_base_bdevs_operational": 4,
00:12:06.069    "base_bdevs_list": [
00:12:06.069      {
00:12:06.069        "name": "BaseBdev1",
00:12:06.069        "uuid": "6486186a-73a5-4e74-a3f0-281638ed78f7",
00:12:06.069        "is_configured": true,
00:12:06.069        "data_offset": 0,
00:12:06.069        "data_size": 65536
00:12:06.069      },
00:12:06.069      {
00:12:06.069        "name": null,
00:12:06.069        "uuid": "e59df4ff-c6a0-4565-b17d-b64dbc2c90c1",
00:12:06.069        "is_configured": false,
00:12:06.069        "data_offset": 0,
00:12:06.069        "data_size": 65536
00:12:06.069      },
00:12:06.069      {
00:12:06.069        "name": "BaseBdev3",
00:12:06.069        "uuid": "51d5ddbc-5390-43cf-a376-61a1c77f4b4b",
00:12:06.069        "is_configured": true,
00:12:06.069        "data_offset": 0,
00:12:06.069        "data_size": 65536
00:12:06.069      },
00:12:06.069      {
00:12:06.069        "name": "BaseBdev4",
00:12:06.069        "uuid": "1fb57dc4-aaca-4a73-9d59-773fe3d87b46",
00:12:06.069        "is_configured": true,
00:12:06.069        "data_offset": 0,
00:12:06.069        "data_size": 65536
00:12:06.069      }
00:12:06.069    ]
00:12:06.069  }'
00:12:06.069   11:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:06.069   11:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.328    11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:12:06.328    11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:06.328    11:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.328    11:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.328    11:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.328   11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
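Here no explicit attach call is needed: creating a malloc bdev whose name matches the missing slot (BaseBdev1) is enough for the raid module to claim it during examine, so the discovered count climbs back to 3 on its own. An illustrative sketch of that behavior, same assumptions as before:

    # the raid module claims the new bdev automatically because its name matches a configured slot
    ./scripts/rpc.py bdev_malloc_create 32 512 -b BaseBdev1
    ./scripts/rpc.py bdev_get_bdevs -b BaseBdev1 | jq '.[0] | {claimed, claim_type}'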
00:12:06.328   11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:12:06.328   11:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.328   11:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.587  [2024-12-16 11:33:32.394574] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:12:06.587   11:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.587   11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:06.587   11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:06.587   11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:06.587   11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:06.587   11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:06.587   11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:06.587   11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:06.587   11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:06.587   11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:06.587   11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:06.587    11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:06.587    11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:06.587    11:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.587    11:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.587    11:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.587   11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:06.587    "name": "Existed_Raid",
00:12:06.587    "uuid": "00000000-0000-0000-0000-000000000000",
00:12:06.587    "strip_size_kb": 0,
00:12:06.587    "state": "configuring",
00:12:06.587    "raid_level": "raid1",
00:12:06.587    "superblock": false,
00:12:06.587    "num_base_bdevs": 4,
00:12:06.587    "num_base_bdevs_discovered": 2,
00:12:06.587    "num_base_bdevs_operational": 4,
00:12:06.587    "base_bdevs_list": [
00:12:06.587      {
00:12:06.587        "name": "BaseBdev1",
00:12:06.587        "uuid": "6486186a-73a5-4e74-a3f0-281638ed78f7",
00:12:06.587        "is_configured": true,
00:12:06.587        "data_offset": 0,
00:12:06.587        "data_size": 65536
00:12:06.587      },
00:12:06.587      {
00:12:06.587        "name": null,
00:12:06.587        "uuid": "e59df4ff-c6a0-4565-b17d-b64dbc2c90c1",
00:12:06.587        "is_configured": false,
00:12:06.587        "data_offset": 0,
00:12:06.587        "data_size": 65536
00:12:06.587      },
00:12:06.587      {
00:12:06.587        "name": null,
00:12:06.587        "uuid": "51d5ddbc-5390-43cf-a376-61a1c77f4b4b",
00:12:06.587        "is_configured": false,
00:12:06.587        "data_offset": 0,
00:12:06.587        "data_size": 65536
00:12:06.587      },
00:12:06.587      {
00:12:06.587        "name": "BaseBdev4",
00:12:06.587        "uuid": "1fb57dc4-aaca-4a73-9d59-773fe3d87b46",
00:12:06.587        "is_configured": true,
00:12:06.587        "data_offset": 0,
00:12:06.587        "data_size": 65536
00:12:06.587      }
00:12:06.587    ]
00:12:06.587  }'
00:12:06.587   11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:06.587   11:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.846    11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:06.846    11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:12:06.846    11:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.846    11:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.846    11:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.846   11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:12:06.846   11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:12:06.846   11:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.846   11:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.846  [2024-12-16 11:33:32.885731] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:06.846   11:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.846   11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:06.846   11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:06.846   11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:06.846   11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:06.846   11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:06.846   11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:06.846   11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:06.846   11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:06.846   11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:06.846   11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:06.846    11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:06.846    11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:06.846    11:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.846    11:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.104    11:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.104   11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:07.104    "name": "Existed_Raid",
00:12:07.104    "uuid": "00000000-0000-0000-0000-000000000000",
00:12:07.104    "strip_size_kb": 0,
00:12:07.104    "state": "configuring",
00:12:07.105    "raid_level": "raid1",
00:12:07.105    "superblock": false,
00:12:07.105    "num_base_bdevs": 4,
00:12:07.105    "num_base_bdevs_discovered": 3,
00:12:07.105    "num_base_bdevs_operational": 4,
00:12:07.105    "base_bdevs_list": [
00:12:07.105      {
00:12:07.105        "name": "BaseBdev1",
00:12:07.105        "uuid": "6486186a-73a5-4e74-a3f0-281638ed78f7",
00:12:07.105        "is_configured": true,
00:12:07.105        "data_offset": 0,
00:12:07.105        "data_size": 65536
00:12:07.105      },
00:12:07.105      {
00:12:07.105        "name": null,
00:12:07.105        "uuid": "e59df4ff-c6a0-4565-b17d-b64dbc2c90c1",
00:12:07.105        "is_configured": false,
00:12:07.105        "data_offset": 0,
00:12:07.105        "data_size": 65536
00:12:07.105      },
00:12:07.105      {
00:12:07.105        "name": "BaseBdev3",
00:12:07.105        "uuid": "51d5ddbc-5390-43cf-a376-61a1c77f4b4b",
00:12:07.105        "is_configured": true,
00:12:07.105        "data_offset": 0,
00:12:07.105        "data_size": 65536
00:12:07.105      },
00:12:07.105      {
00:12:07.105        "name": "BaseBdev4",
00:12:07.105        "uuid": "1fb57dc4-aaca-4a73-9d59-773fe3d87b46",
00:12:07.105        "is_configured": true,
00:12:07.105        "data_offset": 0,
00:12:07.105        "data_size": 65536
00:12:07.105      }
00:12:07.105    ]
00:12:07.105  }'
00:12:07.105   11:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:07.105   11:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.364    11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:07.364    11:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.364    11:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.364    11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:12:07.364    11:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.364   11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
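BaseBdev3 still exists as a bdev after being removed from the array, so re-attaching it requires an explicit bdev_raid_add_base_bdev call rather than the examine path; the follow-up jq check then shows slot 2 configured again. A minimal sketch under the same assumptions:

    ./scripts/rpc.py bdev_raid_add_base_bdev Existed_Raid BaseBdev3
    ./scripts/rpc.py bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'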
00:12:07.364   11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:12:07.364   11:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.364   11:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.364  [2024-12-16 11:33:33.416913] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:12:07.623   11:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.623   11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:07.623   11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:07.623   11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:07.623   11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:07.623   11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:07.623   11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:07.623   11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:07.623   11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:07.623   11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:07.623   11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:07.623    11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:07.623    11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:07.623    11:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.623    11:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.623    11:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.623   11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:07.623    "name": "Existed_Raid",
00:12:07.623    "uuid": "00000000-0000-0000-0000-000000000000",
00:12:07.623    "strip_size_kb": 0,
00:12:07.623    "state": "configuring",
00:12:07.623    "raid_level": "raid1",
00:12:07.623    "superblock": false,
00:12:07.623    "num_base_bdevs": 4,
00:12:07.623    "num_base_bdevs_discovered": 2,
00:12:07.623    "num_base_bdevs_operational": 4,
00:12:07.623    "base_bdevs_list": [
00:12:07.623      {
00:12:07.623        "name": null,
00:12:07.623        "uuid": "6486186a-73a5-4e74-a3f0-281638ed78f7",
00:12:07.623        "is_configured": false,
00:12:07.623        "data_offset": 0,
00:12:07.623        "data_size": 65536
00:12:07.623      },
00:12:07.623      {
00:12:07.623        "name": null,
00:12:07.623        "uuid": "e59df4ff-c6a0-4565-b17d-b64dbc2c90c1",
00:12:07.623        "is_configured": false,
00:12:07.623        "data_offset": 0,
00:12:07.623        "data_size": 65536
00:12:07.623      },
00:12:07.623      {
00:12:07.623        "name": "BaseBdev3",
00:12:07.623        "uuid": "51d5ddbc-5390-43cf-a376-61a1c77f4b4b",
00:12:07.623        "is_configured": true,
00:12:07.623        "data_offset": 0,
00:12:07.623        "data_size": 65536
00:12:07.623      },
00:12:07.623      {
00:12:07.623        "name": "BaseBdev4",
00:12:07.623        "uuid": "1fb57dc4-aaca-4a73-9d59-773fe3d87b46",
00:12:07.623        "is_configured": true,
00:12:07.623        "data_offset": 0,
00:12:07.623        "data_size": 65536
00:12:07.623      }
00:12:07.623    ]
00:12:07.623  }'
00:12:07.623   11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:07.623   11:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.885    11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:07.885    11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:12:07.885    11:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.885    11:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.885    11:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.885   11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
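Deleting the malloc bdev out from under the array (bdev_malloc_delete BaseBdev1) exercises the hot-remove path instead: the raid remains in "configuring" and slot 0 reverts to an unconfigured placeholder. Sketched by hand, same assumptions:

    ./scripts/rpc.py bdev_malloc_delete BaseBdev1
    ./scripts/rpc.py bdev_raid_get_bdevs all | jq '.[0] | {state, num_base_bdevs_discovered}'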
00:12:07.885   11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:12:07.885   11:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.885   11:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.885  [2024-12-16 11:33:33.934823] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:07.885   11:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.885   11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:07.885   11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:07.885   11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:07.885   11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:07.885   11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:07.885   11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:07.885   11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:07.885   11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:07.885   11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:07.885   11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:07.885    11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:07.885    11:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:08.145    11:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:08.145    11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:08.145    11:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:08.145   11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:08.145    "name": "Existed_Raid",
00:12:08.145    "uuid": "00000000-0000-0000-0000-000000000000",
00:12:08.145    "strip_size_kb": 0,
00:12:08.145    "state": "configuring",
00:12:08.145    "raid_level": "raid1",
00:12:08.145    "superblock": false,
00:12:08.145    "num_base_bdevs": 4,
00:12:08.145    "num_base_bdevs_discovered": 3,
00:12:08.145    "num_base_bdevs_operational": 4,
00:12:08.145    "base_bdevs_list": [
00:12:08.145      {
00:12:08.145        "name": null,
00:12:08.145        "uuid": "6486186a-73a5-4e74-a3f0-281638ed78f7",
00:12:08.145        "is_configured": false,
00:12:08.145        "data_offset": 0,
00:12:08.145        "data_size": 65536
00:12:08.145      },
00:12:08.145      {
00:12:08.145        "name": "BaseBdev2",
00:12:08.145        "uuid": "e59df4ff-c6a0-4565-b17d-b64dbc2c90c1",
00:12:08.145        "is_configured": true,
00:12:08.145        "data_offset": 0,
00:12:08.145        "data_size": 65536
00:12:08.145      },
00:12:08.145      {
00:12:08.145        "name": "BaseBdev3",
00:12:08.145        "uuid": "51d5ddbc-5390-43cf-a376-61a1c77f4b4b",
00:12:08.145        "is_configured": true,
00:12:08.145        "data_offset": 0,
00:12:08.145        "data_size": 65536
00:12:08.145      },
00:12:08.145      {
00:12:08.145        "name": "BaseBdev4",
00:12:08.145        "uuid": "1fb57dc4-aaca-4a73-9d59-773fe3d87b46",
00:12:08.145        "is_configured": true,
00:12:08.145        "data_offset": 0,
00:12:08.145        "data_size": 65536
00:12:08.145      }
00:12:08.145    ]
00:12:08.145  }'
00:12:08.145   11:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:08.145   11:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:08.403    11:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:08.403    11:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:12:08.403    11:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:08.403    11:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:08.403    11:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:12:08.663    11:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:12:08.663    11:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:08.663    11:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:08.663    11:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:08.663    11:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6486186a-73a5-4e74-a3f0-281638ed78f7
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:08.663  [2024-12-16 11:33:34.541229] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:12:08.663  [2024-12-16 11:33:34.541384] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:12:08.663  [2024-12-16 11:33:34.541405] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:12:08.663  [2024-12-16 11:33:34.541720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:12:08.663  [2024-12-16 11:33:34.541882] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:12:08.663  [2024-12-16 11:33:34.541894] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00
00:12:08.663  [2024-12-16 11:33:34.542109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:08.663  NewBaseBdev
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:08.663  [
00:12:08.663  {
00:12:08.663  "name": "NewBaseBdev",
00:12:08.663  "aliases": [
00:12:08.663  "6486186a-73a5-4e74-a3f0-281638ed78f7"
00:12:08.663  ],
00:12:08.663  "product_name": "Malloc disk",
00:12:08.663  "block_size": 512,
00:12:08.663  "num_blocks": 65536,
00:12:08.663  "uuid": "6486186a-73a5-4e74-a3f0-281638ed78f7",
00:12:08.663  "assigned_rate_limits": {
00:12:08.663  "rw_ios_per_sec": 0,
00:12:08.663  "rw_mbytes_per_sec": 0,
00:12:08.663  "r_mbytes_per_sec": 0,
00:12:08.663  "w_mbytes_per_sec": 0
00:12:08.663  },
00:12:08.663  "claimed": true,
00:12:08.663  "claim_type": "exclusive_write",
00:12:08.663  "zoned": false,
00:12:08.663  "supported_io_types": {
00:12:08.663  "read": true,
00:12:08.663  "write": true,
00:12:08.663  "unmap": true,
00:12:08.663  "flush": true,
00:12:08.663  "reset": true,
00:12:08.663  "nvme_admin": false,
00:12:08.663  "nvme_io": false,
00:12:08.663  "nvme_io_md": false,
00:12:08.663  "write_zeroes": true,
00:12:08.663  "zcopy": true,
00:12:08.663  "get_zone_info": false,
00:12:08.663  "zone_management": false,
00:12:08.663  "zone_append": false,
00:12:08.663  "compare": false,
00:12:08.663  "compare_and_write": false,
00:12:08.663  "abort": true,
00:12:08.663  "seek_hole": false,
00:12:08.663  "seek_data": false,
00:12:08.663  "copy": true,
00:12:08.663  "nvme_iov_md": false
00:12:08.663  },
00:12:08.663  "memory_domains": [
00:12:08.663  {
00:12:08.663  "dma_device_id": "system",
00:12:08.663  "dma_device_type": 1
00:12:08.663  },
00:12:08.663  {
00:12:08.663  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:08.663  "dma_device_type": 2
00:12:08.663  }
00:12:08.663  ],
00:12:08.663  "driver_specific": {}
00:12:08.663  }
00:12:08.663  ]
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:08.663    11:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:08.663    11:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:08.663    11:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:08.663    11:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:08.663    11:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:08.663    "name": "Existed_Raid",
00:12:08.663    "uuid": "003ff27d-746b-4c65-927f-1df5621dd6ed",
00:12:08.663    "strip_size_kb": 0,
00:12:08.663    "state": "online",
00:12:08.663    "raid_level": "raid1",
00:12:08.663    "superblock": false,
00:12:08.663    "num_base_bdevs": 4,
00:12:08.663    "num_base_bdevs_discovered": 4,
00:12:08.663    "num_base_bdevs_operational": 4,
00:12:08.663    "base_bdevs_list": [
00:12:08.663      {
00:12:08.663        "name": "NewBaseBdev",
00:12:08.663        "uuid": "6486186a-73a5-4e74-a3f0-281638ed78f7",
00:12:08.663        "is_configured": true,
00:12:08.663        "data_offset": 0,
00:12:08.663        "data_size": 65536
00:12:08.663      },
00:12:08.663      {
00:12:08.663        "name": "BaseBdev2",
00:12:08.663        "uuid": "e59df4ff-c6a0-4565-b17d-b64dbc2c90c1",
00:12:08.663        "is_configured": true,
00:12:08.663        "data_offset": 0,
00:12:08.663        "data_size": 65536
00:12:08.663      },
00:12:08.663      {
00:12:08.663        "name": "BaseBdev3",
00:12:08.663        "uuid": "51d5ddbc-5390-43cf-a376-61a1c77f4b4b",
00:12:08.663        "is_configured": true,
00:12:08.663        "data_offset": 0,
00:12:08.663        "data_size": 65536
00:12:08.663      },
00:12:08.663      {
00:12:08.663        "name": "BaseBdev4",
00:12:08.663        "uuid": "1fb57dc4-aaca-4a73-9d59-773fe3d87b46",
00:12:08.663        "is_configured": true,
00:12:08.663        "data_offset": 0,
00:12:08.663        "data_size": 65536
00:12:08.663      }
00:12:08.663    ]
00:12:08.663  }'
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:08.663   11:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
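The repair step above reads the UUID that slot 0 still remembers and recreates a malloc bdev (NewBaseBdev) with that UUID via -u; matching on UUID lets the raid module claim it, finish configuration, and move Existed_Raid to "online" with all four slots populated. A hedged sketch of the same idea, again assuming a running target and scripts/rpc.py on the default socket:

    uuid=$(./scripts/rpc.py bdev_raid_get_bdevs all | jq -r '.[0].base_bdevs_list[0].uuid')
    ./scripts/rpc.py bdev_malloc_create 32 512 -b NewBaseBdev -u "$uuid"
    ./scripts/rpc.py bdev_raid_get_bdevs all | jq -r '.[0].state'   # expected: online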
00:12:09.233   11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:12:09.233   11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:12:09.233   11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:12:09.233   11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:12:09.233   11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:12:09.233   11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:12:09.233    11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:12:09.233    11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:12:09.233    11:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:09.233    11:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:09.233  [2024-12-16 11:33:35.032874] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:09.233    11:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:09.233   11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:12:09.233    "name": "Existed_Raid",
00:12:09.233    "aliases": [
00:12:09.233      "003ff27d-746b-4c65-927f-1df5621dd6ed"
00:12:09.233    ],
00:12:09.233    "product_name": "Raid Volume",
00:12:09.233    "block_size": 512,
00:12:09.233    "num_blocks": 65536,
00:12:09.233    "uuid": "003ff27d-746b-4c65-927f-1df5621dd6ed",
00:12:09.233    "assigned_rate_limits": {
00:12:09.233      "rw_ios_per_sec": 0,
00:12:09.233      "rw_mbytes_per_sec": 0,
00:12:09.233      "r_mbytes_per_sec": 0,
00:12:09.233      "w_mbytes_per_sec": 0
00:12:09.233    },
00:12:09.233    "claimed": false,
00:12:09.233    "zoned": false,
00:12:09.233    "supported_io_types": {
00:12:09.233      "read": true,
00:12:09.233      "write": true,
00:12:09.233      "unmap": false,
00:12:09.233      "flush": false,
00:12:09.233      "reset": true,
00:12:09.233      "nvme_admin": false,
00:12:09.233      "nvme_io": false,
00:12:09.233      "nvme_io_md": false,
00:12:09.233      "write_zeroes": true,
00:12:09.233      "zcopy": false,
00:12:09.233      "get_zone_info": false,
00:12:09.233      "zone_management": false,
00:12:09.233      "zone_append": false,
00:12:09.233      "compare": false,
00:12:09.233      "compare_and_write": false,
00:12:09.233      "abort": false,
00:12:09.233      "seek_hole": false,
00:12:09.233      "seek_data": false,
00:12:09.233      "copy": false,
00:12:09.233      "nvme_iov_md": false
00:12:09.233    },
00:12:09.233    "memory_domains": [
00:12:09.233      {
00:12:09.233        "dma_device_id": "system",
00:12:09.233        "dma_device_type": 1
00:12:09.233      },
00:12:09.233      {
00:12:09.233        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:09.233        "dma_device_type": 2
00:12:09.233      },
00:12:09.233      {
00:12:09.233        "dma_device_id": "system",
00:12:09.233        "dma_device_type": 1
00:12:09.233      },
00:12:09.233      {
00:12:09.233        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:09.233        "dma_device_type": 2
00:12:09.233      },
00:12:09.233      {
00:12:09.233        "dma_device_id": "system",
00:12:09.233        "dma_device_type": 1
00:12:09.233      },
00:12:09.233      {
00:12:09.233        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:09.233        "dma_device_type": 2
00:12:09.233      },
00:12:09.233      {
00:12:09.233        "dma_device_id": "system",
00:12:09.233        "dma_device_type": 1
00:12:09.233      },
00:12:09.233      {
00:12:09.233        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:09.233        "dma_device_type": 2
00:12:09.233      }
00:12:09.233    ],
00:12:09.233    "driver_specific": {
00:12:09.233      "raid": {
00:12:09.233        "uuid": "003ff27d-746b-4c65-927f-1df5621dd6ed",
00:12:09.233        "strip_size_kb": 0,
00:12:09.233        "state": "online",
00:12:09.233        "raid_level": "raid1",
00:12:09.233        "superblock": false,
00:12:09.233        "num_base_bdevs": 4,
00:12:09.233        "num_base_bdevs_discovered": 4,
00:12:09.233        "num_base_bdevs_operational": 4,
00:12:09.233        "base_bdevs_list": [
00:12:09.233          {
00:12:09.233            "name": "NewBaseBdev",
00:12:09.233            "uuid": "6486186a-73a5-4e74-a3f0-281638ed78f7",
00:12:09.233            "is_configured": true,
00:12:09.233            "data_offset": 0,
00:12:09.233            "data_size": 65536
00:12:09.233          },
00:12:09.233          {
00:12:09.233            "name": "BaseBdev2",
00:12:09.233            "uuid": "e59df4ff-c6a0-4565-b17d-b64dbc2c90c1",
00:12:09.233            "is_configured": true,
00:12:09.233            "data_offset": 0,
00:12:09.233            "data_size": 65536
00:12:09.233          },
00:12:09.233          {
00:12:09.233            "name": "BaseBdev3",
00:12:09.233            "uuid": "51d5ddbc-5390-43cf-a376-61a1c77f4b4b",
00:12:09.233            "is_configured": true,
00:12:09.233            "data_offset": 0,
00:12:09.233            "data_size": 65536
00:12:09.233          },
00:12:09.233          {
00:12:09.233            "name": "BaseBdev4",
00:12:09.233            "uuid": "1fb57dc4-aaca-4a73-9d59-773fe3d87b46",
00:12:09.233            "is_configured": true,
00:12:09.233            "data_offset": 0,
00:12:09.233            "data_size": 65536
00:12:09.233          }
00:12:09.233        ]
00:12:09.233      }
00:12:09.233    }
00:12:09.233  }'
00:12:09.233    11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:12:09.233   11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:12:09.233  BaseBdev2
00:12:09.233  BaseBdev3
00:12:09.233  BaseBdev4'
00:12:09.233    11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:09.233   11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:12:09.233   11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:09.233    11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:09.233    11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:12:09.233    11:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:09.233    11:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:09.233    11:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:09.233   11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:12:09.233   11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:12:09.233   11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:09.233    11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:12:09.233    11:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:09.233    11:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:09.233    11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:09.233    11:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:09.233   11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:12:09.233   11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:12:09.233   11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:09.233    11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:12:09.233    11:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:09.233    11:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:09.233    11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:09.233    11:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:09.233   11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:12:09.233   11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:12:09.233   11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:09.492    11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:12:09.492    11:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:09.492    11:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:09.492    11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:09.492    11:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:09.492   11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:12:09.492   11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
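Note: the bdev_raid.sh@188-@193 steps traced above pull the configured base bdev names out of the raid bdev's driver_specific.raid.base_bdevs_list and then assert that every base bdev reports the same block_size/md_size/md_interleave/dif_type tuple as the raid bdev ('512   ' here: 512-byte blocks, no metadata, no DIF). A condensed sketch of the same check, not part of the captured trace, assuming SPDK's scripts/rpc.py and a target on the default RPC socket:
# Sketch only -- mirrors the jq filters that appear verbatim in the trace above.
rpc=./scripts/rpc.py                       # assumed path to SPDK's RPC client
props() {                                  # block_size, md_size, md_interleave, dif_type as one string
    "$rpc" bdev_get_bdevs -b "$1" \
        | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
}
raid_props=$(props Existed_Raid)
names=$("$rpc" bdev_get_bdevs -b Existed_Raid \
    | jq -r '.[].driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
for name in $names; do
    [[ "$(props "$name")" == "$raid_props" ]] || echo "property mismatch on $name"
done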
00:12:09.492   11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:12:09.492   11:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:09.492   11:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:09.492  [2024-12-16 11:33:35.355931] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:12:09.492  [2024-12-16 11:33:35.355963] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:09.492  [2024-12-16 11:33:35.356046] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:09.492  [2024-12-16 11:33:35.356327] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:09.492  [2024-12-16 11:33:35.356346] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline
00:12:09.492   11:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:09.492   11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 84308
00:12:09.492   11:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 84308 ']'
00:12:09.492   11:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 84308
00:12:09.492    11:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname
00:12:09.492   11:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:12:09.492    11:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84308
00:12:09.492   11:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:12:09.492   11:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:12:09.492   11:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84308'
00:12:09.492  killing process with pid 84308
00:12:09.492   11:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 84308
00:12:09.492  [2024-12-16 11:33:35.397764] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:12:09.492   11:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 84308
00:12:09.492  [2024-12-16 11:33:35.438736] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:12:09.751   11:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:12:09.751  ************************************
00:12:09.751  END TEST raid_state_function_test
00:12:09.751  ************************************
00:12:09.751  
00:12:09.751  real	0m9.856s
00:12:09.751  user	0m16.893s
00:12:09.751  sys	0m1.998s
00:12:09.751   11:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:09.751   11:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:09.751   11:33:35 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true
00:12:09.751   11:33:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:12:09.751   11:33:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:09.751   11:33:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:12:09.751  ************************************
00:12:09.751  START TEST raid_state_function_test_sb
00:12:09.751  ************************************
00:12:09.751   11:33:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true
00:12:09.751   11:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:12:09.751   11:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:12:09.751   11:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:12:09.751   11:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:12:09.752    11:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:12:09.752    11:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:12:09.752    11:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:12:09.752    11:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:12:09.752    11:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:12:09.752    11:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:12:09.752    11:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:12:09.752    11:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:12:09.752    11:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:12:09.752    11:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:12:09.752    11:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:12:09.752    11:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:12:09.752    11:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:12:09.752    11:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:12:09.752   11:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:12:09.752   11:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:12:09.752   11:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:12:09.752   11:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:12:09.752   11:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:12:09.752   11:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:12:09.752   11:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:12:09.752   11:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:12:09.752   11:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:12:09.752   11:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
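Note: with superblock=true the test selects -s for bdev_raid_create, so the array carries on-disk metadata (visible later as data_offset 2048 / data_size 63488 instead of 0 / 65536). A minimal standalone sketch of the equivalent call, assuming scripts/rpc.py; as the trace below shows, the named base bdevs do not have to exist yet, and the array stays in the configuring state until they do:
# Sketch only; same arguments as the rpc_cmd bdev_raid_create call traced below.
rpc=./scripts/rpc.py                       # assumed path
"$rpc" bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
"$rpc" bdev_raid_get_bdevs all             # reports state "configuring" until all four base bdevs appear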
00:12:09.752   11:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84963
00:12:09.752   11:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:12:09.752   11:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84963'
00:12:09.752  Process raid pid: 84963
00:12:09.752   11:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84963
00:12:09.752   11:33:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 84963 ']'
00:12:09.752   11:33:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:09.752   11:33:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:12:09.752  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:09.752   11:33:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:09.752   11:33:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:12:09.752   11:33:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:10.011  [2024-12-16 11:33:35.852875] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:12:10.011  [2024-12-16 11:33:35.853094] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:10.011  [2024-12-16 11:33:36.002994] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:10.011  [2024-12-16 11:33:36.056119] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:12:10.270  [2024-12-16 11:33:36.102726] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:10.270  [2024-12-16 11:33:36.102855] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:10.838   11:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:12:10.838   11:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0
00:12:10.838   11:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:12:10.838   11:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:10.838   11:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:10.838  [2024-12-16 11:33:36.789806] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:10.838  [2024-12-16 11:33:36.789887] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:10.838  [2024-12-16 11:33:36.789902] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:10.838  [2024-12-16 11:33:36.789913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:10.838  [2024-12-16 11:33:36.789925] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:10.838  [2024-12-16 11:33:36.789939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:10.838  [2024-12-16 11:33:36.789946] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:12:10.838  [2024-12-16 11:33:36.789956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:12:10.838   11:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:10.838   11:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:10.838   11:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:10.838   11:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:10.838   11:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:10.838   11:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:10.838   11:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:10.838   11:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:10.838   11:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:10.838   11:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:10.838   11:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:10.839    11:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:10.839    11:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:10.839    11:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:10.839    11:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:10.839    11:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:10.839   11:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:10.839    "name": "Existed_Raid",
00:12:10.839    "uuid": "f43e52ae-8ced-4a36-811c-254865d7a64e",
00:12:10.839    "strip_size_kb": 0,
00:12:10.839    "state": "configuring",
00:12:10.839    "raid_level": "raid1",
00:12:10.839    "superblock": true,
00:12:10.839    "num_base_bdevs": 4,
00:12:10.839    "num_base_bdevs_discovered": 0,
00:12:10.839    "num_base_bdevs_operational": 4,
00:12:10.839    "base_bdevs_list": [
00:12:10.839      {
00:12:10.839        "name": "BaseBdev1",
00:12:10.839        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:10.839        "is_configured": false,
00:12:10.839        "data_offset": 0,
00:12:10.839        "data_size": 0
00:12:10.839      },
00:12:10.839      {
00:12:10.839        "name": "BaseBdev2",
00:12:10.839        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:10.839        "is_configured": false,
00:12:10.839        "data_offset": 0,
00:12:10.839        "data_size": 0
00:12:10.839      },
00:12:10.839      {
00:12:10.839        "name": "BaseBdev3",
00:12:10.839        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:10.839        "is_configured": false,
00:12:10.839        "data_offset": 0,
00:12:10.839        "data_size": 0
00:12:10.839      },
00:12:10.839      {
00:12:10.839        "name": "BaseBdev4",
00:12:10.839        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:10.839        "is_configured": false,
00:12:10.839        "data_offset": 0,
00:12:10.839        "data_size": 0
00:12:10.839      }
00:12:10.839    ]
00:12:10.839  }'
00:12:10.839   11:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:10.839   11:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
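Note: the verify_raid_bdev_state block above (bdev_raid.sh@103-@115) fetches bdev_raid_get_bdevs all, selects Existed_Raid, and checks the fields the test asserts on; at this point the state is "configuring" with num_base_bdevs_discovered 0 because none of the four base bdevs exist yet. A hedged one-shot version of the same assertion, assuming scripts/rpc.py:
# Sketch only; checks the same fields the helper verifies for this step.
rpc=./scripts/rpc.py
"$rpc" bdev_raid_get_bdevs all | jq -e '
    .[] | select(.name == "Existed_Raid")
        | .state == "configuring" and .raid_level == "raid1"
          and .num_base_bdevs_discovered == 0 and .num_base_bdevs_operational == 4' \
    >/dev/null || echo "Existed_Raid is not in the expected configuring state"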
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:11.407  [2024-12-16 11:33:37.272864] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:12:11.407  [2024-12-16 11:33:37.272919] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:11.407  [2024-12-16 11:33:37.284927] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:11.407  [2024-12-16 11:33:37.284992] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:11.407  [2024-12-16 11:33:37.285003] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:11.407  [2024-12-16 11:33:37.285014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:11.407  [2024-12-16 11:33:37.285021] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:11.407  [2024-12-16 11:33:37.285031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:11.407  [2024-12-16 11:33:37.285038] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:12:11.407  [2024-12-16 11:33:37.285048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:11.407  [2024-12-16 11:33:37.306333] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:11.407  BaseBdev1
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:11.407  [
00:12:11.407  {
00:12:11.407  "name": "BaseBdev1",
00:12:11.407  "aliases": [
00:12:11.407  "9f3a5cd3-461b-42b8-b04e-ee3c1c434d58"
00:12:11.407  ],
00:12:11.407  "product_name": "Malloc disk",
00:12:11.407  "block_size": 512,
00:12:11.407  "num_blocks": 65536,
00:12:11.407  "uuid": "9f3a5cd3-461b-42b8-b04e-ee3c1c434d58",
00:12:11.407  "assigned_rate_limits": {
00:12:11.407  "rw_ios_per_sec": 0,
00:12:11.407  "rw_mbytes_per_sec": 0,
00:12:11.407  "r_mbytes_per_sec": 0,
00:12:11.407  "w_mbytes_per_sec": 0
00:12:11.407  },
00:12:11.407  "claimed": true,
00:12:11.407  "claim_type": "exclusive_write",
00:12:11.407  "zoned": false,
00:12:11.407  "supported_io_types": {
00:12:11.407  "read": true,
00:12:11.407  "write": true,
00:12:11.407  "unmap": true,
00:12:11.407  "flush": true,
00:12:11.407  "reset": true,
00:12:11.407  "nvme_admin": false,
00:12:11.407  "nvme_io": false,
00:12:11.407  "nvme_io_md": false,
00:12:11.407  "write_zeroes": true,
00:12:11.407  "zcopy": true,
00:12:11.407  "get_zone_info": false,
00:12:11.407  "zone_management": false,
00:12:11.407  "zone_append": false,
00:12:11.407  "compare": false,
00:12:11.407  "compare_and_write": false,
00:12:11.407  "abort": true,
00:12:11.407  "seek_hole": false,
00:12:11.407  "seek_data": false,
00:12:11.407  "copy": true,
00:12:11.407  "nvme_iov_md": false
00:12:11.407  },
00:12:11.407  "memory_domains": [
00:12:11.407  {
00:12:11.407  "dma_device_id": "system",
00:12:11.407  "dma_device_type": 1
00:12:11.407  },
00:12:11.407  {
00:12:11.407  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:11.407  "dma_device_type": 2
00:12:11.407  }
00:12:11.407  ],
00:12:11.407  "driver_specific": {}
00:12:11.407  }
00:12:11.407  ]
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
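Note: waitforbdev (common/autotest_common.sh@899-@907 above) is how the test blocks until a freshly created malloc bdev is usable: it issues bdev_wait_for_examine and then bdev_get_bdevs -b <name> -t <timeout>, which only returns the descriptor once the bdev has registered (2000 is the bdev_timeout default set above). The same pattern stand-alone, using only calls that appear in the trace and an assumed scripts/rpc.py path:
# Sketch only; 32 MiB / 512-byte blocks matches the 65536 num_blocks reported above.
rpc=./scripts/rpc.py
"$rpc" bdev_malloc_create 32 512 -b BaseBdev1
"$rpc" bdev_wait_for_examine
"$rpc" bdev_get_bdevs -b BaseBdev1 -t 2000 >/dev/null && echo "BaseBdev1 registered"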
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:11.407    11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:11.407    11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:11.407    11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:11.407    11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:11.407    11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:11.407   11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:11.407    "name": "Existed_Raid",
00:12:11.407    "uuid": "68e317f0-a528-4f9a-bca6-5298302a2926",
00:12:11.407    "strip_size_kb": 0,
00:12:11.407    "state": "configuring",
00:12:11.407    "raid_level": "raid1",
00:12:11.407    "superblock": true,
00:12:11.407    "num_base_bdevs": 4,
00:12:11.407    "num_base_bdevs_discovered": 1,
00:12:11.407    "num_base_bdevs_operational": 4,
00:12:11.407    "base_bdevs_list": [
00:12:11.407      {
00:12:11.407        "name": "BaseBdev1",
00:12:11.407        "uuid": "9f3a5cd3-461b-42b8-b04e-ee3c1c434d58",
00:12:11.407        "is_configured": true,
00:12:11.407        "data_offset": 2048,
00:12:11.408        "data_size": 63488
00:12:11.408      },
00:12:11.408      {
00:12:11.408        "name": "BaseBdev2",
00:12:11.408        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:11.408        "is_configured": false,
00:12:11.408        "data_offset": 0,
00:12:11.408        "data_size": 0
00:12:11.408      },
00:12:11.408      {
00:12:11.408        "name": "BaseBdev3",
00:12:11.408        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:11.408        "is_configured": false,
00:12:11.408        "data_offset": 0,
00:12:11.408        "data_size": 0
00:12:11.408      },
00:12:11.408      {
00:12:11.408        "name": "BaseBdev4",
00:12:11.408        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:11.408        "is_configured": false,
00:12:11.408        "data_offset": 0,
00:12:11.408        "data_size": 0
00:12:11.408      }
00:12:11.408    ]
00:12:11.408  }'
00:12:11.408   11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:11.408   11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:11.975   11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:12:11.975   11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:11.975   11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:11.975  [2024-12-16 11:33:37.805598] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:12:11.975  [2024-12-16 11:33:37.805678] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:12:11.975   11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:11.975   11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:12:11.975   11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:11.975   11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:11.975  [2024-12-16 11:33:37.813675] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:11.975  [2024-12-16 11:33:37.815973] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:11.975  [2024-12-16 11:33:37.816035] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:11.975  [2024-12-16 11:33:37.816049] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:11.975  [2024-12-16 11:33:37.816060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:11.975  [2024-12-16 11:33:37.816068] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:12:11.975  [2024-12-16 11:33:37.816078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:12:11.975   11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:11.975   11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:12:11.975   11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:11.975   11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:11.975   11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:11.975   11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:11.975   11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:11.975   11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:11.975   11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:11.975   11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:11.975   11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:11.975   11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:11.975   11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:11.975    11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:11.975    11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:11.975    11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:11.975    11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:11.975    11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:11.975   11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:11.975    "name": "Existed_Raid",
00:12:11.975    "uuid": "86e90aac-a4c7-41f0-8d03-fd24f985e06f",
00:12:11.975    "strip_size_kb": 0,
00:12:11.975    "state": "configuring",
00:12:11.975    "raid_level": "raid1",
00:12:11.975    "superblock": true,
00:12:11.975    "num_base_bdevs": 4,
00:12:11.975    "num_base_bdevs_discovered": 1,
00:12:11.975    "num_base_bdevs_operational": 4,
00:12:11.975    "base_bdevs_list": [
00:12:11.975      {
00:12:11.975        "name": "BaseBdev1",
00:12:11.975        "uuid": "9f3a5cd3-461b-42b8-b04e-ee3c1c434d58",
00:12:11.975        "is_configured": true,
00:12:11.975        "data_offset": 2048,
00:12:11.975        "data_size": 63488
00:12:11.975      },
00:12:11.975      {
00:12:11.975        "name": "BaseBdev2",
00:12:11.975        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:11.975        "is_configured": false,
00:12:11.975        "data_offset": 0,
00:12:11.975        "data_size": 0
00:12:11.975      },
00:12:11.975      {
00:12:11.975        "name": "BaseBdev3",
00:12:11.975        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:11.975        "is_configured": false,
00:12:11.975        "data_offset": 0,
00:12:11.975        "data_size": 0
00:12:11.975      },
00:12:11.975      {
00:12:11.975        "name": "BaseBdev4",
00:12:11.975        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:11.975        "is_configured": false,
00:12:11.975        "data_offset": 0,
00:12:11.975        "data_size": 0
00:12:11.975      }
00:12:11.975    ]
00:12:11.975  }'
00:12:11.975   11:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:11.975   11:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:12.235   11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:12:12.235   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:12.235   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:12.495  [2024-12-16 11:33:38.309786] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:12.495  BaseBdev2
00:12:12.495   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:12.495   11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:12:12.495   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:12:12.495   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:12:12.495   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:12:12.495   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:12:12.495   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:12:12.495   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:12:12.495   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:12.495   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:12.495   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:12.495   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:12:12.495   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:12.495   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:12.495  [
00:12:12.495  {
00:12:12.495  "name": "BaseBdev2",
00:12:12.495  "aliases": [
00:12:12.495  "4cccb225-f6d2-4b4a-9a81-57cf81faeb8e"
00:12:12.495  ],
00:12:12.495  "product_name": "Malloc disk",
00:12:12.495  "block_size": 512,
00:12:12.495  "num_blocks": 65536,
00:12:12.495  "uuid": "4cccb225-f6d2-4b4a-9a81-57cf81faeb8e",
00:12:12.495  "assigned_rate_limits": {
00:12:12.495  "rw_ios_per_sec": 0,
00:12:12.495  "rw_mbytes_per_sec": 0,
00:12:12.495  "r_mbytes_per_sec": 0,
00:12:12.495  "w_mbytes_per_sec": 0
00:12:12.495  },
00:12:12.495  "claimed": true,
00:12:12.495  "claim_type": "exclusive_write",
00:12:12.495  "zoned": false,
00:12:12.495  "supported_io_types": {
00:12:12.495  "read": true,
00:12:12.495  "write": true,
00:12:12.495  "unmap": true,
00:12:12.495  "flush": true,
00:12:12.495  "reset": true,
00:12:12.495  "nvme_admin": false,
00:12:12.495  "nvme_io": false,
00:12:12.495  "nvme_io_md": false,
00:12:12.495  "write_zeroes": true,
00:12:12.495  "zcopy": true,
00:12:12.495  "get_zone_info": false,
00:12:12.495  "zone_management": false,
00:12:12.495  "zone_append": false,
00:12:12.495  "compare": false,
00:12:12.495  "compare_and_write": false,
00:12:12.495  "abort": true,
00:12:12.495  "seek_hole": false,
00:12:12.495  "seek_data": false,
00:12:12.495  "copy": true,
00:12:12.495  "nvme_iov_md": false
00:12:12.495  },
00:12:12.495  "memory_domains": [
00:12:12.495  {
00:12:12.495  "dma_device_id": "system",
00:12:12.495  "dma_device_type": 1
00:12:12.495  },
00:12:12.495  {
00:12:12.495  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:12.495  "dma_device_type": 2
00:12:12.495  }
00:12:12.495  ],
00:12:12.495  "driver_specific": {}
00:12:12.495  }
00:12:12.495  ]
00:12:12.496   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:12.496   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:12:12.496   11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:12:12.496   11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:12.496   11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:12.496   11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:12.496   11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:12.496   11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:12.496   11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:12.496   11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:12.496   11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:12.496   11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:12.496   11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:12.496   11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:12.496    11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:12.496    11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:12.496    11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:12.496    11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:12.496    11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:12.496   11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:12.496    "name": "Existed_Raid",
00:12:12.496    "uuid": "86e90aac-a4c7-41f0-8d03-fd24f985e06f",
00:12:12.496    "strip_size_kb": 0,
00:12:12.496    "state": "configuring",
00:12:12.496    "raid_level": "raid1",
00:12:12.496    "superblock": true,
00:12:12.496    "num_base_bdevs": 4,
00:12:12.496    "num_base_bdevs_discovered": 2,
00:12:12.496    "num_base_bdevs_operational": 4,
00:12:12.496    "base_bdevs_list": [
00:12:12.496      {
00:12:12.496        "name": "BaseBdev1",
00:12:12.496        "uuid": "9f3a5cd3-461b-42b8-b04e-ee3c1c434d58",
00:12:12.496        "is_configured": true,
00:12:12.496        "data_offset": 2048,
00:12:12.496        "data_size": 63488
00:12:12.496      },
00:12:12.496      {
00:12:12.496        "name": "BaseBdev2",
00:12:12.496        "uuid": "4cccb225-f6d2-4b4a-9a81-57cf81faeb8e",
00:12:12.496        "is_configured": true,
00:12:12.496        "data_offset": 2048,
00:12:12.496        "data_size": 63488
00:12:12.496      },
00:12:12.496      {
00:12:12.496        "name": "BaseBdev3",
00:12:12.496        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:12.496        "is_configured": false,
00:12:12.496        "data_offset": 0,
00:12:12.496        "data_size": 0
00:12:12.496      },
00:12:12.496      {
00:12:12.496        "name": "BaseBdev4",
00:12:12.496        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:12.496        "is_configured": false,
00:12:12.496        "data_offset": 0,
00:12:12.496        "data_size": 0
00:12:12.496      }
00:12:12.496    ]
00:12:12.496  }'
00:12:12.496   11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:12.496   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:12.754   11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:12:12.754   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:12.754   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:12.754  [2024-12-16 11:33:38.800685] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:12.754  BaseBdev3
00:12:12.754   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:12.754   11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:12:12.754   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:12:12.754   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:12:12.754   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:12:12.754   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:12:12.754   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:12:12.754   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:12:12.754   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:12.754   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:12.754   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:12.754   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:12:12.754   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:12.754   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:13.062  [
00:12:13.062  {
00:12:13.062  "name": "BaseBdev3",
00:12:13.062  "aliases": [
00:12:13.062  "d7c8de4b-b936-40f5-a71a-88665b2cfd42"
00:12:13.062  ],
00:12:13.062  "product_name": "Malloc disk",
00:12:13.062  "block_size": 512,
00:12:13.062  "num_blocks": 65536,
00:12:13.062  "uuid": "d7c8de4b-b936-40f5-a71a-88665b2cfd42",
00:12:13.062  "assigned_rate_limits": {
00:12:13.062  "rw_ios_per_sec": 0,
00:12:13.062  "rw_mbytes_per_sec": 0,
00:12:13.062  "r_mbytes_per_sec": 0,
00:12:13.062  "w_mbytes_per_sec": 0
00:12:13.062  },
00:12:13.062  "claimed": true,
00:12:13.062  "claim_type": "exclusive_write",
00:12:13.062  "zoned": false,
00:12:13.062  "supported_io_types": {
00:12:13.062  "read": true,
00:12:13.062  "write": true,
00:12:13.062  "unmap": true,
00:12:13.062  "flush": true,
00:12:13.062  "reset": true,
00:12:13.062  "nvme_admin": false,
00:12:13.062  "nvme_io": false,
00:12:13.062  "nvme_io_md": false,
00:12:13.062  "write_zeroes": true,
00:12:13.062  "zcopy": true,
00:12:13.062  "get_zone_info": false,
00:12:13.062  "zone_management": false,
00:12:13.062  "zone_append": false,
00:12:13.062  "compare": false,
00:12:13.062  "compare_and_write": false,
00:12:13.062  "abort": true,
00:12:13.062  "seek_hole": false,
00:12:13.062  "seek_data": false,
00:12:13.062  "copy": true,
00:12:13.062  "nvme_iov_md": false
00:12:13.062  },
00:12:13.062  "memory_domains": [
00:12:13.062  {
00:12:13.062  "dma_device_id": "system",
00:12:13.062  "dma_device_type": 1
00:12:13.062  },
00:12:13.062  {
00:12:13.062  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:13.062  "dma_device_type": 2
00:12:13.062  }
00:12:13.062  ],
00:12:13.062  "driver_specific": {}
00:12:13.062  }
00:12:13.062  ]
00:12:13.062   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:13.062   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:12:13.062   11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:12:13.062   11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:13.062   11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:13.062   11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:13.062   11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:13.062   11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:13.062   11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:13.062   11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:13.062   11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:13.062   11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:13.062   11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:13.062   11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:13.062    11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:13.062    11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:13.062    11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:13.062    11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:13.062    11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:13.062   11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:13.062    "name": "Existed_Raid",
00:12:13.062    "uuid": "86e90aac-a4c7-41f0-8d03-fd24f985e06f",
00:12:13.062    "strip_size_kb": 0,
00:12:13.062    "state": "configuring",
00:12:13.062    "raid_level": "raid1",
00:12:13.062    "superblock": true,
00:12:13.062    "num_base_bdevs": 4,
00:12:13.062    "num_base_bdevs_discovered": 3,
00:12:13.062    "num_base_bdevs_operational": 4,
00:12:13.062    "base_bdevs_list": [
00:12:13.062      {
00:12:13.062        "name": "BaseBdev1",
00:12:13.062        "uuid": "9f3a5cd3-461b-42b8-b04e-ee3c1c434d58",
00:12:13.062        "is_configured": true,
00:12:13.062        "data_offset": 2048,
00:12:13.062        "data_size": 63488
00:12:13.062      },
00:12:13.062      {
00:12:13.062        "name": "BaseBdev2",
00:12:13.062        "uuid": "4cccb225-f6d2-4b4a-9a81-57cf81faeb8e",
00:12:13.062        "is_configured": true,
00:12:13.062        "data_offset": 2048,
00:12:13.062        "data_size": 63488
00:12:13.062      },
00:12:13.062      {
00:12:13.062        "name": "BaseBdev3",
00:12:13.062        "uuid": "d7c8de4b-b936-40f5-a71a-88665b2cfd42",
00:12:13.062        "is_configured": true,
00:12:13.062        "data_offset": 2048,
00:12:13.062        "data_size": 63488
00:12:13.062      },
00:12:13.062      {
00:12:13.062        "name": "BaseBdev4",
00:12:13.062        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:13.062        "is_configured": false,
00:12:13.062        "data_offset": 0,
00:12:13.062        "data_size": 0
00:12:13.062      }
00:12:13.062    ]
00:12:13.062  }'
00:12:13.062   11:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:13.062   11:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:13.321   11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:12:13.321   11:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:13.321   11:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:13.321  [2024-12-16 11:33:39.347217] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:12:13.321  [2024-12-16 11:33:39.347594] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:12:13.321  [2024-12-16 11:33:39.347655] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:12:13.321  BaseBdev4
00:12:13.321  [2024-12-16 11:33:39.348016] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:12:13.321  [2024-12-16 11:33:39.348228] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:12:13.321  [2024-12-16 11:33:39.348282] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:12:13.321  [2024-12-16 11:33:39.348483] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:13.321   11:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:13.321   11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:12:13.321   11:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:12:13.321   11:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:12:13.321   11:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:12:13.321   11:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:12:13.321   11:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:12:13.321   11:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:12:13.321   11:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:13.321   11:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:13.321   11:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:13.321   11:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:12:13.321   11:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:13.321   11:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:13.321  [
00:12:13.321  {
00:12:13.321  "name": "BaseBdev4",
00:12:13.321  "aliases": [
00:12:13.321  "7250a53b-d116-49c7-b792-52339a85f36b"
00:12:13.321  ],
00:12:13.321  "product_name": "Malloc disk",
00:12:13.321  "block_size": 512,
00:12:13.321  "num_blocks": 65536,
00:12:13.321  "uuid": "7250a53b-d116-49c7-b792-52339a85f36b",
00:12:13.321  "assigned_rate_limits": {
00:12:13.321  "rw_ios_per_sec": 0,
00:12:13.321  "rw_mbytes_per_sec": 0,
00:12:13.321  "r_mbytes_per_sec": 0,
00:12:13.321  "w_mbytes_per_sec": 0
00:12:13.321  },
00:12:13.321  "claimed": true,
00:12:13.321  "claim_type": "exclusive_write",
00:12:13.321  "zoned": false,
00:12:13.321  "supported_io_types": {
00:12:13.321  "read": true,
00:12:13.321  "write": true,
00:12:13.321  "unmap": true,
00:12:13.321  "flush": true,
00:12:13.321  "reset": true,
00:12:13.321  "nvme_admin": false,
00:12:13.321  "nvme_io": false,
00:12:13.321  "nvme_io_md": false,
00:12:13.321  "write_zeroes": true,
00:12:13.321  "zcopy": true,
00:12:13.321  "get_zone_info": false,
00:12:13.321  "zone_management": false,
00:12:13.321  "zone_append": false,
00:12:13.321  "compare": false,
00:12:13.321  "compare_and_write": false,
00:12:13.321  "abort": true,
00:12:13.321  "seek_hole": false,
00:12:13.321  "seek_data": false,
00:12:13.321  "copy": true,
00:12:13.321  "nvme_iov_md": false
00:12:13.321  },
00:12:13.321  "memory_domains": [
00:12:13.321  {
00:12:13.321  "dma_device_id": "system",
00:12:13.321  "dma_device_type": 1
00:12:13.321  },
00:12:13.321  {
00:12:13.321  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:13.321  "dma_device_type": 2
00:12:13.321  }
00:12:13.321  ],
00:12:13.321  "driver_specific": {}
00:12:13.321  }
00:12:13.321  ]
00:12:13.321   11:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:13.321   11:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:12:13.321   11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:12:13.321   11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:13.580   11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4
00:12:13.580   11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:13.580   11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:13.580   11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:13.580   11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:13.580   11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:13.580   11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:13.580   11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:13.580   11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:13.580   11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:13.580    11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:13.580    11:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:13.580    11:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:13.580    11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:13.580    11:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:13.580   11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:13.580    "name": "Existed_Raid",
00:12:13.580    "uuid": "86e90aac-a4c7-41f0-8d03-fd24f985e06f",
00:12:13.580    "strip_size_kb": 0,
00:12:13.580    "state": "online",
00:12:13.580    "raid_level": "raid1",
00:12:13.580    "superblock": true,
00:12:13.580    "num_base_bdevs": 4,
00:12:13.580    "num_base_bdevs_discovered": 4,
00:12:13.580    "num_base_bdevs_operational": 4,
00:12:13.580    "base_bdevs_list": [
00:12:13.580      {
00:12:13.580        "name": "BaseBdev1",
00:12:13.580        "uuid": "9f3a5cd3-461b-42b8-b04e-ee3c1c434d58",
00:12:13.580        "is_configured": true,
00:12:13.580        "data_offset": 2048,
00:12:13.580        "data_size": 63488
00:12:13.580      },
00:12:13.580      {
00:12:13.580        "name": "BaseBdev2",
00:12:13.580        "uuid": "4cccb225-f6d2-4b4a-9a81-57cf81faeb8e",
00:12:13.580        "is_configured": true,
00:12:13.580        "data_offset": 2048,
00:12:13.580        "data_size": 63488
00:12:13.580      },
00:12:13.580      {
00:12:13.580        "name": "BaseBdev3",
00:12:13.580        "uuid": "d7c8de4b-b936-40f5-a71a-88665b2cfd42",
00:12:13.580        "is_configured": true,
00:12:13.580        "data_offset": 2048,
00:12:13.580        "data_size": 63488
00:12:13.580      },
00:12:13.580      {
00:12:13.580        "name": "BaseBdev4",
00:12:13.580        "uuid": "7250a53b-d116-49c7-b792-52339a85f36b",
00:12:13.580        "is_configured": true,
00:12:13.580        "data_offset": 2048,
00:12:13.580        "data_size": 63488
00:12:13.580      }
00:12:13.580    ]
00:12:13.580  }'
00:12:13.580   11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:13.580   11:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:13.839   11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:12:13.839   11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:12:13.839   11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:12:13.839   11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:12:13.839   11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:12:13.839   11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:12:13.839    11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:12:13.839    11:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:13.839    11:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:13.839    11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:12:13.839  [2024-12-16 11:33:39.862853] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:13.839    11:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:14.097   11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:12:14.097    "name": "Existed_Raid",
00:12:14.097    "aliases": [
00:12:14.097      "86e90aac-a4c7-41f0-8d03-fd24f985e06f"
00:12:14.097    ],
00:12:14.097    "product_name": "Raid Volume",
00:12:14.098    "block_size": 512,
00:12:14.098    "num_blocks": 63488,
00:12:14.098    "uuid": "86e90aac-a4c7-41f0-8d03-fd24f985e06f",
00:12:14.098    "assigned_rate_limits": {
00:12:14.098      "rw_ios_per_sec": 0,
00:12:14.098      "rw_mbytes_per_sec": 0,
00:12:14.098      "r_mbytes_per_sec": 0,
00:12:14.098      "w_mbytes_per_sec": 0
00:12:14.098    },
00:12:14.098    "claimed": false,
00:12:14.098    "zoned": false,
00:12:14.098    "supported_io_types": {
00:12:14.098      "read": true,
00:12:14.098      "write": true,
00:12:14.098      "unmap": false,
00:12:14.098      "flush": false,
00:12:14.098      "reset": true,
00:12:14.098      "nvme_admin": false,
00:12:14.098      "nvme_io": false,
00:12:14.098      "nvme_io_md": false,
00:12:14.098      "write_zeroes": true,
00:12:14.098      "zcopy": false,
00:12:14.098      "get_zone_info": false,
00:12:14.098      "zone_management": false,
00:12:14.098      "zone_append": false,
00:12:14.098      "compare": false,
00:12:14.098      "compare_and_write": false,
00:12:14.098      "abort": false,
00:12:14.098      "seek_hole": false,
00:12:14.098      "seek_data": false,
00:12:14.098      "copy": false,
00:12:14.098      "nvme_iov_md": false
00:12:14.098    },
00:12:14.098    "memory_domains": [
00:12:14.098      {
00:12:14.098        "dma_device_id": "system",
00:12:14.098        "dma_device_type": 1
00:12:14.098      },
00:12:14.098      {
00:12:14.098        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:14.098        "dma_device_type": 2
00:12:14.098      },
00:12:14.098      {
00:12:14.098        "dma_device_id": "system",
00:12:14.098        "dma_device_type": 1
00:12:14.098      },
00:12:14.098      {
00:12:14.098        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:14.098        "dma_device_type": 2
00:12:14.098      },
00:12:14.098      {
00:12:14.098        "dma_device_id": "system",
00:12:14.098        "dma_device_type": 1
00:12:14.098      },
00:12:14.098      {
00:12:14.098        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:14.098        "dma_device_type": 2
00:12:14.098      },
00:12:14.098      {
00:12:14.098        "dma_device_id": "system",
00:12:14.098        "dma_device_type": 1
00:12:14.098      },
00:12:14.098      {
00:12:14.098        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:14.098        "dma_device_type": 2
00:12:14.098      }
00:12:14.098    ],
00:12:14.098    "driver_specific": {
00:12:14.098      "raid": {
00:12:14.098        "uuid": "86e90aac-a4c7-41f0-8d03-fd24f985e06f",
00:12:14.098        "strip_size_kb": 0,
00:12:14.098        "state": "online",
00:12:14.098        "raid_level": "raid1",
00:12:14.098        "superblock": true,
00:12:14.098        "num_base_bdevs": 4,
00:12:14.098        "num_base_bdevs_discovered": 4,
00:12:14.098        "num_base_bdevs_operational": 4,
00:12:14.098        "base_bdevs_list": [
00:12:14.098          {
00:12:14.098            "name": "BaseBdev1",
00:12:14.098            "uuid": "9f3a5cd3-461b-42b8-b04e-ee3c1c434d58",
00:12:14.098            "is_configured": true,
00:12:14.098            "data_offset": 2048,
00:12:14.098            "data_size": 63488
00:12:14.098          },
00:12:14.098          {
00:12:14.098            "name": "BaseBdev2",
00:12:14.098            "uuid": "4cccb225-f6d2-4b4a-9a81-57cf81faeb8e",
00:12:14.098            "is_configured": true,
00:12:14.098            "data_offset": 2048,
00:12:14.098            "data_size": 63488
00:12:14.098          },
00:12:14.098          {
00:12:14.098            "name": "BaseBdev3",
00:12:14.098            "uuid": "d7c8de4b-b936-40f5-a71a-88665b2cfd42",
00:12:14.098            "is_configured": true,
00:12:14.098            "data_offset": 2048,
00:12:14.098            "data_size": 63488
00:12:14.098          },
00:12:14.098          {
00:12:14.098            "name": "BaseBdev4",
00:12:14.098            "uuid": "7250a53b-d116-49c7-b792-52339a85f36b",
00:12:14.098            "is_configured": true,
00:12:14.098            "data_offset": 2048,
00:12:14.098            "data_size": 63488
00:12:14.098          }
00:12:14.098        ]
00:12:14.098      }
00:12:14.098    }
00:12:14.098  }'
00:12:14.098    11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:12:14.098   11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:12:14.098  BaseBdev2
00:12:14.098  BaseBdev3
00:12:14.098  BaseBdev4'
00:12:14.098    11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:14.098   11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:12:14.098   11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:14.098    11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:14.098    11:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:12:14.098    11:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:14.098    11:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:14.098    11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:14.098   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:12:14.098   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:12:14.098   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:14.098    11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:12:14.098    11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:14.098    11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:14.098    11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:14.098    11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:14.098   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:12:14.098   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:12:14.098   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:14.098    11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:12:14.098    11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:14.098    11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:14.098    11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:14.098    11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:14.098   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:12:14.098   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:12:14.098   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:14.098    11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:14.098    11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:12:14.098    11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:14.098    11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:14.098    11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:14.098   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:12:14.098   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:12:14.098   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:12:14.098   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:14.098   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:14.098  [2024-12-16 11:33:40.158010] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:12:14.358   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:14.358   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:12:14.358   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:12:14.358   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:12:14.358   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0
00:12:14.358   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:12:14.358   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:12:14.358   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:14.358   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:14.358   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:14.358   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:14.358   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:14.358   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:14.358   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:14.358   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:14.358   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:14.358    11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:14.358    11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:14.358    11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:14.358    11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:14.358    11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:14.358   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:14.358    "name": "Existed_Raid",
00:12:14.358    "uuid": "86e90aac-a4c7-41f0-8d03-fd24f985e06f",
00:12:14.358    "strip_size_kb": 0,
00:12:14.358    "state": "online",
00:12:14.358    "raid_level": "raid1",
00:12:14.358    "superblock": true,
00:12:14.358    "num_base_bdevs": 4,
00:12:14.358    "num_base_bdevs_discovered": 3,
00:12:14.358    "num_base_bdevs_operational": 3,
00:12:14.358    "base_bdevs_list": [
00:12:14.358      {
00:12:14.358        "name": null,
00:12:14.358        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:14.358        "is_configured": false,
00:12:14.358        "data_offset": 0,
00:12:14.358        "data_size": 63488
00:12:14.358      },
00:12:14.358      {
00:12:14.358        "name": "BaseBdev2",
00:12:14.358        "uuid": "4cccb225-f6d2-4b4a-9a81-57cf81faeb8e",
00:12:14.358        "is_configured": true,
00:12:14.358        "data_offset": 2048,
00:12:14.358        "data_size": 63488
00:12:14.358      },
00:12:14.358      {
00:12:14.358        "name": "BaseBdev3",
00:12:14.358        "uuid": "d7c8de4b-b936-40f5-a71a-88665b2cfd42",
00:12:14.358        "is_configured": true,
00:12:14.358        "data_offset": 2048,
00:12:14.358        "data_size": 63488
00:12:14.358      },
00:12:14.358      {
00:12:14.358        "name": "BaseBdev4",
00:12:14.358        "uuid": "7250a53b-d116-49c7-b792-52339a85f36b",
00:12:14.358        "is_configured": true,
00:12:14.358        "data_offset": 2048,
00:12:14.358        "data_size": 63488
00:12:14.358      }
00:12:14.358    ]
00:12:14.358  }'
00:12:14.358   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:14.358   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:14.617   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:12:14.617   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:12:14.617    11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:14.617    11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:14.617    11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:12:14.617    11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:14.617    11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:14.877  [2024-12-16 11:33:40.705176] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:12:14.877    11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:12:14.877    11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:14.877    11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:14.877    11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:14.877    11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:14.877  [2024-12-16 11:33:40.761220] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:12:14.877    11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:12:14.877    11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:14.877    11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:14.877    11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:14.877    11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:14.877  [2024-12-16 11:33:40.829236] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:12:14.877  [2024-12-16 11:33:40.829448] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:14.877  [2024-12-16 11:33:40.841840] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:14.877  [2024-12-16 11:33:40.841991] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:14.877  [2024-12-16 11:33:40.842051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:12:14.877    11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:12:14.877    11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:14.877    11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:14.877    11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:14.877    11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:14.877  BaseBdev2
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:14.877  [
00:12:14.877  {
00:12:14.877  "name": "BaseBdev2",
00:12:14.877  "aliases": [
00:12:14.877  "81a0a11e-3ba1-44e4-bee7-e1089b091d80"
00:12:14.877  ],
00:12:14.877  "product_name": "Malloc disk",
00:12:14.877  "block_size": 512,
00:12:14.877  "num_blocks": 65536,
00:12:14.877  "uuid": "81a0a11e-3ba1-44e4-bee7-e1089b091d80",
00:12:14.877  "assigned_rate_limits": {
00:12:14.877  "rw_ios_per_sec": 0,
00:12:14.877  "rw_mbytes_per_sec": 0,
00:12:14.877  "r_mbytes_per_sec": 0,
00:12:14.877  "w_mbytes_per_sec": 0
00:12:14.877  },
00:12:14.877  "claimed": false,
00:12:14.877  "zoned": false,
00:12:14.877  "supported_io_types": {
00:12:14.877  "read": true,
00:12:14.877  "write": true,
00:12:14.877  "unmap": true,
00:12:14.877  "flush": true,
00:12:14.877  "reset": true,
00:12:14.877  "nvme_admin": false,
00:12:14.877  "nvme_io": false,
00:12:14.877  "nvme_io_md": false,
00:12:14.877  "write_zeroes": true,
00:12:14.877  "zcopy": true,
00:12:14.877  "get_zone_info": false,
00:12:14.877  "zone_management": false,
00:12:14.877  "zone_append": false,
00:12:14.877  "compare": false,
00:12:14.877  "compare_and_write": false,
00:12:14.877  "abort": true,
00:12:14.877  "seek_hole": false,
00:12:14.877  "seek_data": false,
00:12:14.877  "copy": true,
00:12:14.877  "nvme_iov_md": false
00:12:14.877  },
00:12:14.877  "memory_domains": [
00:12:14.877  {
00:12:14.877  "dma_device_id": "system",
00:12:14.877  "dma_device_type": 1
00:12:14.877  },
00:12:14.877  {
00:12:14.877  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:14.877  "dma_device_type": 2
00:12:14.877  }
00:12:14.877  ],
00:12:14.877  "driver_specific": {}
00:12:14.877  }
00:12:14.877  ]
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:14.877   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:15.137  BaseBdev3
00:12:15.137   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.137   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:12:15.137   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:12:15.137   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:12:15.137   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:12:15.137   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:12:15.137   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:12:15.137   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:12:15.137   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.137   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:15.137   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.137   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:12:15.137   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.137   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:15.137  [
00:12:15.137  {
00:12:15.137  "name": "BaseBdev3",
00:12:15.137  "aliases": [
00:12:15.137  "354b6c21-c91e-464f-91cf-463ebf827525"
00:12:15.137  ],
00:12:15.137  "product_name": "Malloc disk",
00:12:15.137  "block_size": 512,
00:12:15.137  "num_blocks": 65536,
00:12:15.137  "uuid": "354b6c21-c91e-464f-91cf-463ebf827525",
00:12:15.137  "assigned_rate_limits": {
00:12:15.137  "rw_ios_per_sec": 0,
00:12:15.137  "rw_mbytes_per_sec": 0,
00:12:15.137  "r_mbytes_per_sec": 0,
00:12:15.137  "w_mbytes_per_sec": 0
00:12:15.137  },
00:12:15.137  "claimed": false,
00:12:15.137  "zoned": false,
00:12:15.137  "supported_io_types": {
00:12:15.137  "read": true,
00:12:15.137  "write": true,
00:12:15.137  "unmap": true,
00:12:15.137  "flush": true,
00:12:15.137  "reset": true,
00:12:15.137  "nvme_admin": false,
00:12:15.137  "nvme_io": false,
00:12:15.137  "nvme_io_md": false,
00:12:15.137  "write_zeroes": true,
00:12:15.137  "zcopy": true,
00:12:15.137  "get_zone_info": false,
00:12:15.137  "zone_management": false,
00:12:15.137  "zone_append": false,
00:12:15.137  "compare": false,
00:12:15.137  "compare_and_write": false,
00:12:15.137  "abort": true,
00:12:15.137  "seek_hole": false,
00:12:15.137  "seek_data": false,
00:12:15.137  "copy": true,
00:12:15.137  "nvme_iov_md": false
00:12:15.137  },
00:12:15.137  "memory_domains": [
00:12:15.137  {
00:12:15.137  "dma_device_id": "system",
00:12:15.137  "dma_device_type": 1
00:12:15.137  },
00:12:15.137  {
00:12:15.137  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:15.137  "dma_device_type": 2
00:12:15.137  }
00:12:15.137  ],
00:12:15.137  "driver_specific": {}
00:12:15.137  }
00:12:15.137  ]
00:12:15.137   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.137   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:12:15.137   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:12:15.137   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:12:15.137   11:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:12:15.137   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.137   11:33:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:15.137  BaseBdev4
00:12:15.137   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.137   11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:12:15.137   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:12:15.137   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:12:15.137   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:12:15.137   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:12:15.137   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:12:15.137   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:12:15.137   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.137   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:15.137   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.137   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:12:15.137   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.137   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:15.137  [
00:12:15.137  {
00:12:15.137  "name": "BaseBdev4",
00:12:15.137  "aliases": [
00:12:15.137  "6dd5d9fa-e1d9-47d4-a8c8-2b1be8ee0fdd"
00:12:15.137  ],
00:12:15.137  "product_name": "Malloc disk",
00:12:15.138  "block_size": 512,
00:12:15.138  "num_blocks": 65536,
00:12:15.138  "uuid": "6dd5d9fa-e1d9-47d4-a8c8-2b1be8ee0fdd",
00:12:15.138  "assigned_rate_limits": {
00:12:15.138  "rw_ios_per_sec": 0,
00:12:15.138  "rw_mbytes_per_sec": 0,
00:12:15.138  "r_mbytes_per_sec": 0,
00:12:15.138  "w_mbytes_per_sec": 0
00:12:15.138  },
00:12:15.138  "claimed": false,
00:12:15.138  "zoned": false,
00:12:15.138  "supported_io_types": {
00:12:15.138  "read": true,
00:12:15.138  "write": true,
00:12:15.138  "unmap": true,
00:12:15.138  "flush": true,
00:12:15.138  "reset": true,
00:12:15.138  "nvme_admin": false,
00:12:15.138  "nvme_io": false,
00:12:15.138  "nvme_io_md": false,
00:12:15.138  "write_zeroes": true,
00:12:15.138  "zcopy": true,
00:12:15.138  "get_zone_info": false,
00:12:15.138  "zone_management": false,
00:12:15.138  "zone_append": false,
00:12:15.138  "compare": false,
00:12:15.138  "compare_and_write": false,
00:12:15.138  "abort": true,
00:12:15.138  "seek_hole": false,
00:12:15.138  "seek_data": false,
00:12:15.138  "copy": true,
00:12:15.138  "nvme_iov_md": false
00:12:15.138  },
00:12:15.138  "memory_domains": [
00:12:15.138  {
00:12:15.138  "dma_device_id": "system",
00:12:15.138  "dma_device_type": 1
00:12:15.138  },
00:12:15.138  {
00:12:15.138  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:15.138  "dma_device_type": 2
00:12:15.138  }
00:12:15.138  ],
00:12:15.138  "driver_specific": {}
00:12:15.138  }
00:12:15.138  ]
00:12:15.138   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.138   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:12:15.138   11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:12:15.138   11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:12:15.138   11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:12:15.138   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.138   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:15.138  [2024-12-16 11:33:41.054346] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:15.138  [2024-12-16 11:33:41.054414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:15.138  [2024-12-16 11:33:41.054443] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:15.138  [2024-12-16 11:33:41.056714] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:15.138  [2024-12-16 11:33:41.056777] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:12:15.138   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.138   11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:15.138   11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:15.138   11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:15.138   11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:15.138   11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:15.138   11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:15.138   11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:15.138   11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:15.138   11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:15.138   11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:15.138    11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:15.138    11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:15.138    11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.138    11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:15.138    11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.138   11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:15.138    "name": "Existed_Raid",
00:12:15.138    "uuid": "dfda7f70-249c-4d48-83bb-e9169a6ae281",
00:12:15.138    "strip_size_kb": 0,
00:12:15.138    "state": "configuring",
00:12:15.138    "raid_level": "raid1",
00:12:15.138    "superblock": true,
00:12:15.138    "num_base_bdevs": 4,
00:12:15.138    "num_base_bdevs_discovered": 3,
00:12:15.138    "num_base_bdevs_operational": 4,
00:12:15.138    "base_bdevs_list": [
00:12:15.138      {
00:12:15.138        "name": "BaseBdev1",
00:12:15.138        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:15.138        "is_configured": false,
00:12:15.138        "data_offset": 0,
00:12:15.138        "data_size": 0
00:12:15.138      },
00:12:15.138      {
00:12:15.138        "name": "BaseBdev2",
00:12:15.138        "uuid": "81a0a11e-3ba1-44e4-bee7-e1089b091d80",
00:12:15.138        "is_configured": true,
00:12:15.138        "data_offset": 2048,
00:12:15.138        "data_size": 63488
00:12:15.138      },
00:12:15.138      {
00:12:15.138        "name": "BaseBdev3",
00:12:15.138        "uuid": "354b6c21-c91e-464f-91cf-463ebf827525",
00:12:15.138        "is_configured": true,
00:12:15.138        "data_offset": 2048,
00:12:15.138        "data_size": 63488
00:12:15.138      },
00:12:15.138      {
00:12:15.138        "name": "BaseBdev4",
00:12:15.138        "uuid": "6dd5d9fa-e1d9-47d4-a8c8-2b1be8ee0fdd",
00:12:15.138        "is_configured": true,
00:12:15.138        "data_offset": 2048,
00:12:15.138        "data_size": 63488
00:12:15.138      }
00:12:15.138    ]
00:12:15.138  }'
00:12:15.138   11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:15.138   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:15.706   11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:12:15.706   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.706   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:15.706  [2024-12-16 11:33:41.497583] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:12:15.706   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.706   11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:15.706   11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:15.706   11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:15.706   11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:15.706   11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:15.706   11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:15.706   11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:15.706   11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:15.706   11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:15.706   11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:15.706    11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:15.706    11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:15.706    11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.706    11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:15.706    11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.706   11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:15.706    "name": "Existed_Raid",
00:12:15.706    "uuid": "dfda7f70-249c-4d48-83bb-e9169a6ae281",
00:12:15.706    "strip_size_kb": 0,
00:12:15.706    "state": "configuring",
00:12:15.706    "raid_level": "raid1",
00:12:15.706    "superblock": true,
00:12:15.706    "num_base_bdevs": 4,
00:12:15.706    "num_base_bdevs_discovered": 2,
00:12:15.706    "num_base_bdevs_operational": 4,
00:12:15.706    "base_bdevs_list": [
00:12:15.706      {
00:12:15.706        "name": "BaseBdev1",
00:12:15.706        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:15.706        "is_configured": false,
00:12:15.706        "data_offset": 0,
00:12:15.706        "data_size": 0
00:12:15.706      },
00:12:15.706      {
00:12:15.706        "name": null,
00:12:15.706        "uuid": "81a0a11e-3ba1-44e4-bee7-e1089b091d80",
00:12:15.706        "is_configured": false,
00:12:15.706        "data_offset": 0,
00:12:15.706        "data_size": 63488
00:12:15.706      },
00:12:15.706      {
00:12:15.706        "name": "BaseBdev3",
00:12:15.706        "uuid": "354b6c21-c91e-464f-91cf-463ebf827525",
00:12:15.706        "is_configured": true,
00:12:15.706        "data_offset": 2048,
00:12:15.706        "data_size": 63488
00:12:15.706      },
00:12:15.706      {
00:12:15.706        "name": "BaseBdev4",
00:12:15.706        "uuid": "6dd5d9fa-e1d9-47d4-a8c8-2b1be8ee0fdd",
00:12:15.706        "is_configured": true,
00:12:15.706        "data_offset": 2048,
00:12:15.706        "data_size": 63488
00:12:15.706      }
00:12:15.706    ]
00:12:15.706  }'
00:12:15.706   11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:15.706   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:15.966    11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:15.966    11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:12:15.966    11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.966    11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:15.966    11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.966   11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:12:15.966   11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:12:15.966   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.966   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:15.966  [2024-12-16 11:33:41.988137] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:15.966  BaseBdev1
00:12:15.966   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.966   11:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:12:15.966   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:12:15.966   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:12:15.966   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:12:15.966   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:12:15.966   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:12:15.966   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:12:15.966   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.966   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:15.966   11:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.966   11:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:12:15.966   11:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.966   11:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:15.966  [
00:12:15.966  {
00:12:15.966  "name": "BaseBdev1",
00:12:15.966  "aliases": [
00:12:15.966  "6478ad6f-8e58-4a0d-b35d-d2e71620dccf"
00:12:15.966  ],
00:12:15.966  "product_name": "Malloc disk",
00:12:15.966  "block_size": 512,
00:12:15.966  "num_blocks": 65536,
00:12:15.966  "uuid": "6478ad6f-8e58-4a0d-b35d-d2e71620dccf",
00:12:15.966  "assigned_rate_limits": {
00:12:15.966  "rw_ios_per_sec": 0,
00:12:15.966  "rw_mbytes_per_sec": 0,
00:12:15.966  "r_mbytes_per_sec": 0,
00:12:15.966  "w_mbytes_per_sec": 0
00:12:15.966  },
00:12:15.966  "claimed": true,
00:12:15.966  "claim_type": "exclusive_write",
00:12:15.966  "zoned": false,
00:12:15.966  "supported_io_types": {
00:12:15.966  "read": true,
00:12:15.966  "write": true,
00:12:15.966  "unmap": true,
00:12:15.966  "flush": true,
00:12:15.966  "reset": true,
00:12:15.966  "nvme_admin": false,
00:12:15.966  "nvme_io": false,
00:12:15.966  "nvme_io_md": false,
00:12:15.966  "write_zeroes": true,
00:12:15.966  "zcopy": true,
00:12:15.966  "get_zone_info": false,
00:12:15.966  "zone_management": false,
00:12:15.966  "zone_append": false,
00:12:15.966  "compare": false,
00:12:15.966  "compare_and_write": false,
00:12:15.966  "abort": true,
00:12:15.966  "seek_hole": false,
00:12:15.966  "seek_data": false,
00:12:15.966  "copy": true,
00:12:15.966  "nvme_iov_md": false
00:12:15.966  },
00:12:15.966  "memory_domains": [
00:12:15.966  {
00:12:15.966  "dma_device_id": "system",
00:12:15.966  "dma_device_type": 1
00:12:15.966  },
00:12:15.966  {
00:12:15.966  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:15.966  "dma_device_type": 2
00:12:15.966  }
00:12:15.966  ],
00:12:15.966  "driver_specific": {}
00:12:15.966  }
00:12:15.966  ]
00:12:15.966   11:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.966   11:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:12:15.966   11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:15.966   11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:15.966   11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:15.966   11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:15.966   11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:15.966   11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:15.966   11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:15.966   11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:15.966   11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:15.966   11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:16.224    11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:16.224    11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:16.224    11:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.224    11:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:16.224    11:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.224   11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:16.224    "name": "Existed_Raid",
00:12:16.224    "uuid": "dfda7f70-249c-4d48-83bb-e9169a6ae281",
00:12:16.224    "strip_size_kb": 0,
00:12:16.224    "state": "configuring",
00:12:16.224    "raid_level": "raid1",
00:12:16.224    "superblock": true,
00:12:16.224    "num_base_bdevs": 4,
00:12:16.224    "num_base_bdevs_discovered": 3,
00:12:16.224    "num_base_bdevs_operational": 4,
00:12:16.224    "base_bdevs_list": [
00:12:16.224      {
00:12:16.224        "name": "BaseBdev1",
00:12:16.224        "uuid": "6478ad6f-8e58-4a0d-b35d-d2e71620dccf",
00:12:16.224        "is_configured": true,
00:12:16.224        "data_offset": 2048,
00:12:16.224        "data_size": 63488
00:12:16.224      },
00:12:16.224      {
00:12:16.224        "name": null,
00:12:16.224        "uuid": "81a0a11e-3ba1-44e4-bee7-e1089b091d80",
00:12:16.224        "is_configured": false,
00:12:16.224        "data_offset": 0,
00:12:16.224        "data_size": 63488
00:12:16.224      },
00:12:16.224      {
00:12:16.224        "name": "BaseBdev3",
00:12:16.224        "uuid": "354b6c21-c91e-464f-91cf-463ebf827525",
00:12:16.224        "is_configured": true,
00:12:16.224        "data_offset": 2048,
00:12:16.224        "data_size": 63488
00:12:16.224      },
00:12:16.224      {
00:12:16.224        "name": "BaseBdev4",
00:12:16.224        "uuid": "6dd5d9fa-e1d9-47d4-a8c8-2b1be8ee0fdd",
00:12:16.224        "is_configured": true,
00:12:16.224        "data_offset": 2048,
00:12:16.224        "data_size": 63488
00:12:16.224      }
00:12:16.224    ]
00:12:16.224  }'
00:12:16.224   11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:16.224   11:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:16.483    11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:16.483    11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:12:16.483    11:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.483    11:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:16.483    11:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.483   11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:12:16.483   11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:12:16.483   11:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.483   11:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:16.483  [2024-12-16 11:33:42.531422] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:12:16.483   11:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.483   11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:16.483   11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:16.483   11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:16.483   11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:16.483   11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:16.483   11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:16.483   11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:16.483   11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:16.483   11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:16.483   11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:16.483    11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:16.483    11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:16.483    11:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.483    11:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:16.741    11:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.741   11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:16.741    "name": "Existed_Raid",
00:12:16.741    "uuid": "dfda7f70-249c-4d48-83bb-e9169a6ae281",
00:12:16.741    "strip_size_kb": 0,
00:12:16.741    "state": "configuring",
00:12:16.741    "raid_level": "raid1",
00:12:16.741    "superblock": true,
00:12:16.741    "num_base_bdevs": 4,
00:12:16.741    "num_base_bdevs_discovered": 2,
00:12:16.741    "num_base_bdevs_operational": 4,
00:12:16.741    "base_bdevs_list": [
00:12:16.741      {
00:12:16.741        "name": "BaseBdev1",
00:12:16.741        "uuid": "6478ad6f-8e58-4a0d-b35d-d2e71620dccf",
00:12:16.741        "is_configured": true,
00:12:16.741        "data_offset": 2048,
00:12:16.741        "data_size": 63488
00:12:16.741      },
00:12:16.741      {
00:12:16.741        "name": null,
00:12:16.741        "uuid": "81a0a11e-3ba1-44e4-bee7-e1089b091d80",
00:12:16.741        "is_configured": false,
00:12:16.741        "data_offset": 0,
00:12:16.741        "data_size": 63488
00:12:16.741      },
00:12:16.741      {
00:12:16.741        "name": null,
00:12:16.741        "uuid": "354b6c21-c91e-464f-91cf-463ebf827525",
00:12:16.741        "is_configured": false,
00:12:16.741        "data_offset": 0,
00:12:16.741        "data_size": 63488
00:12:16.741      },
00:12:16.741      {
00:12:16.741        "name": "BaseBdev4",
00:12:16.741        "uuid": "6dd5d9fa-e1d9-47d4-a8c8-2b1be8ee0fdd",
00:12:16.741        "is_configured": true,
00:12:16.741        "data_offset": 2048,
00:12:16.741        "data_size": 63488
00:12:16.741      }
00:12:16.741    ]
00:12:16.741  }'
00:12:16.741   11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:16.741   11:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:16.999    11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:16.999    11:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.999    11:33:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:16.999    11:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:12:16.999    11:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.999   11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:12:16.999   11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:12:16.999   11:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.999   11:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:16.999  [2024-12-16 11:33:43.046733] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:17.000   11:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:17.000   11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:17.000   11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:17.000   11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:17.000   11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:17.000   11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:17.000   11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:17.000   11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:17.000   11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:17.000   11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:17.000   11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:17.000    11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:17.000    11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:17.000    11:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:17.000    11:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:17.258    11:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:17.258   11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:17.258    "name": "Existed_Raid",
00:12:17.258    "uuid": "dfda7f70-249c-4d48-83bb-e9169a6ae281",
00:12:17.258    "strip_size_kb": 0,
00:12:17.258    "state": "configuring",
00:12:17.258    "raid_level": "raid1",
00:12:17.258    "superblock": true,
00:12:17.258    "num_base_bdevs": 4,
00:12:17.258    "num_base_bdevs_discovered": 3,
00:12:17.258    "num_base_bdevs_operational": 4,
00:12:17.258    "base_bdevs_list": [
00:12:17.258      {
00:12:17.258        "name": "BaseBdev1",
00:12:17.258        "uuid": "6478ad6f-8e58-4a0d-b35d-d2e71620dccf",
00:12:17.258        "is_configured": true,
00:12:17.258        "data_offset": 2048,
00:12:17.258        "data_size": 63488
00:12:17.258      },
00:12:17.258      {
00:12:17.258        "name": null,
00:12:17.258        "uuid": "81a0a11e-3ba1-44e4-bee7-e1089b091d80",
00:12:17.258        "is_configured": false,
00:12:17.258        "data_offset": 0,
00:12:17.258        "data_size": 63488
00:12:17.258      },
00:12:17.258      {
00:12:17.258        "name": "BaseBdev3",
00:12:17.258        "uuid": "354b6c21-c91e-464f-91cf-463ebf827525",
00:12:17.258        "is_configured": true,
00:12:17.258        "data_offset": 2048,
00:12:17.258        "data_size": 63488
00:12:17.258      },
00:12:17.258      {
00:12:17.258        "name": "BaseBdev4",
00:12:17.258        "uuid": "6dd5d9fa-e1d9-47d4-a8c8-2b1be8ee0fdd",
00:12:17.258        "is_configured": true,
00:12:17.258        "data_offset": 2048,
00:12:17.258        "data_size": 63488
00:12:17.258      }
00:12:17.258    ]
00:12:17.258  }'
00:12:17.258   11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:17.258   11:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:17.516    11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:12:17.516    11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:17.516    11:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:17.516    11:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:17.516    11:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:17.774   11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:12:17.774   11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:12:17.774   11:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:17.774   11:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:17.774  [2024-12-16 11:33:43.593838] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:12:17.774   11:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:17.774   11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:17.774   11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:17.774   11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:17.774   11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:17.774   11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:17.774   11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:17.774   11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:17.774   11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:17.774   11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:17.774   11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:17.774    11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:17.774    11:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:17.774    11:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:17.774    11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:17.774    11:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:17.774   11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:17.774    "name": "Existed_Raid",
00:12:17.774    "uuid": "dfda7f70-249c-4d48-83bb-e9169a6ae281",
00:12:17.774    "strip_size_kb": 0,
00:12:17.774    "state": "configuring",
00:12:17.774    "raid_level": "raid1",
00:12:17.774    "superblock": true,
00:12:17.774    "num_base_bdevs": 4,
00:12:17.774    "num_base_bdevs_discovered": 2,
00:12:17.774    "num_base_bdevs_operational": 4,
00:12:17.774    "base_bdevs_list": [
00:12:17.774      {
00:12:17.774        "name": null,
00:12:17.774        "uuid": "6478ad6f-8e58-4a0d-b35d-d2e71620dccf",
00:12:17.774        "is_configured": false,
00:12:17.774        "data_offset": 0,
00:12:17.774        "data_size": 63488
00:12:17.774      },
00:12:17.774      {
00:12:17.774        "name": null,
00:12:17.774        "uuid": "81a0a11e-3ba1-44e4-bee7-e1089b091d80",
00:12:17.774        "is_configured": false,
00:12:17.774        "data_offset": 0,
00:12:17.774        "data_size": 63488
00:12:17.774      },
00:12:17.774      {
00:12:17.774        "name": "BaseBdev3",
00:12:17.774        "uuid": "354b6c21-c91e-464f-91cf-463ebf827525",
00:12:17.774        "is_configured": true,
00:12:17.774        "data_offset": 2048,
00:12:17.774        "data_size": 63488
00:12:17.774      },
00:12:17.774      {
00:12:17.774        "name": "BaseBdev4",
00:12:17.774        "uuid": "6dd5d9fa-e1d9-47d4-a8c8-2b1be8ee0fdd",
00:12:17.774        "is_configured": true,
00:12:17.774        "data_offset": 2048,
00:12:17.774        "data_size": 63488
00:12:17.774      }
00:12:17.774    ]
00:12:17.774  }'
00:12:17.774   11:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:17.774   11:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:18.032    11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:12:18.032    11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:18.032    11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:18.032    11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:18.032    11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:18.032   11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:12:18.032   11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:12:18.032   11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:18.032   11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:18.032  [2024-12-16 11:33:44.096186] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:18.291   11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:18.291   11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:18.291   11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:18.291   11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:18.291   11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:18.291   11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:18.291   11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:18.291   11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:18.291   11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:18.291   11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:18.291   11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:18.291    11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:18.291    11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:18.291    11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:18.291    11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:18.291    11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:18.291   11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:18.291    "name": "Existed_Raid",
00:12:18.291    "uuid": "dfda7f70-249c-4d48-83bb-e9169a6ae281",
00:12:18.291    "strip_size_kb": 0,
00:12:18.291    "state": "configuring",
00:12:18.291    "raid_level": "raid1",
00:12:18.291    "superblock": true,
00:12:18.291    "num_base_bdevs": 4,
00:12:18.291    "num_base_bdevs_discovered": 3,
00:12:18.291    "num_base_bdevs_operational": 4,
00:12:18.291    "base_bdevs_list": [
00:12:18.291      {
00:12:18.291        "name": null,
00:12:18.291        "uuid": "6478ad6f-8e58-4a0d-b35d-d2e71620dccf",
00:12:18.291        "is_configured": false,
00:12:18.291        "data_offset": 0,
00:12:18.291        "data_size": 63488
00:12:18.291      },
00:12:18.291      {
00:12:18.291        "name": "BaseBdev2",
00:12:18.291        "uuid": "81a0a11e-3ba1-44e4-bee7-e1089b091d80",
00:12:18.291        "is_configured": true,
00:12:18.291        "data_offset": 2048,
00:12:18.291        "data_size": 63488
00:12:18.291      },
00:12:18.291      {
00:12:18.291        "name": "BaseBdev3",
00:12:18.291        "uuid": "354b6c21-c91e-464f-91cf-463ebf827525",
00:12:18.291        "is_configured": true,
00:12:18.291        "data_offset": 2048,
00:12:18.291        "data_size": 63488
00:12:18.291      },
00:12:18.291      {
00:12:18.291        "name": "BaseBdev4",
00:12:18.291        "uuid": "6dd5d9fa-e1d9-47d4-a8c8-2b1be8ee0fdd",
00:12:18.291        "is_configured": true,
00:12:18.291        "data_offset": 2048,
00:12:18.291        "data_size": 63488
00:12:18.291      }
00:12:18.291    ]
00:12:18.291  }'
00:12:18.291   11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:18.291   11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:18.549    11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:18.549    11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:12:18.549    11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:18.549    11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:18.549    11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:18.549   11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:12:18.549    11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:12:18.549    11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:18.549    11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:18.549    11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:18.808    11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:18.808   11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6478ad6f-8e58-4a0d-b35d-d2e71620dccf
00:12:18.808   11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:18.808   11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:18.808  NewBaseBdev
00:12:18.808  [2024-12-16 11:33:44.658840] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:12:18.808  [2024-12-16 11:33:44.659087] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:12:18.808  [2024-12-16 11:33:44.659107] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:12:18.808  [2024-12-16 11:33:44.659406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:12:18.808  [2024-12-16 11:33:44.659592] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:12:18.808  [2024-12-16 11:33:44.659604] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00
00:12:18.808  [2024-12-16 11:33:44.659722] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:18.808   11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:18.808   11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:12:18.808   11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev
00:12:18.808   11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:12:18.808   11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:12:18.808   11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:12:18.808   11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:12:18.808   11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:12:18.808   11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:18.808   11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:18.808   11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:18.808   11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:12:18.808   11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:18.808   11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:18.808  [
00:12:18.808  {
00:12:18.808  "name": "NewBaseBdev",
00:12:18.808  "aliases": [
00:12:18.808  "6478ad6f-8e58-4a0d-b35d-d2e71620dccf"
00:12:18.808  ],
00:12:18.808  "product_name": "Malloc disk",
00:12:18.808  "block_size": 512,
00:12:18.808  "num_blocks": 65536,
00:12:18.808  "uuid": "6478ad6f-8e58-4a0d-b35d-d2e71620dccf",
00:12:18.808  "assigned_rate_limits": {
00:12:18.808  "rw_ios_per_sec": 0,
00:12:18.808  "rw_mbytes_per_sec": 0,
00:12:18.808  "r_mbytes_per_sec": 0,
00:12:18.808  "w_mbytes_per_sec": 0
00:12:18.808  },
00:12:18.808  "claimed": true,
00:12:18.808  "claim_type": "exclusive_write",
00:12:18.808  "zoned": false,
00:12:18.808  "supported_io_types": {
00:12:18.808  "read": true,
00:12:18.808  "write": true,
00:12:18.808  "unmap": true,
00:12:18.808  "flush": true,
00:12:18.808  "reset": true,
00:12:18.808  "nvme_admin": false,
00:12:18.808  "nvme_io": false,
00:12:18.808  "nvme_io_md": false,
00:12:18.808  "write_zeroes": true,
00:12:18.808  "zcopy": true,
00:12:18.808  "get_zone_info": false,
00:12:18.808  "zone_management": false,
00:12:18.808  "zone_append": false,
00:12:18.808  "compare": false,
00:12:18.808  "compare_and_write": false,
00:12:18.808  "abort": true,
00:12:18.808  "seek_hole": false,
00:12:18.808  "seek_data": false,
00:12:18.808  "copy": true,
00:12:18.808  "nvme_iov_md": false
00:12:18.808  },
00:12:18.808  "memory_domains": [
00:12:18.808  {
00:12:18.808  "dma_device_id": "system",
00:12:18.808  "dma_device_type": 1
00:12:18.808  },
00:12:18.808  {
00:12:18.808  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:18.808  "dma_device_type": 2
00:12:18.809  }
00:12:18.809  ],
00:12:18.809  "driver_specific": {}
00:12:18.809  }
00:12:18.809  ]
00:12:18.809   11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:18.809   11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:12:18.809   11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4
00:12:18.809   11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:18.809   11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:18.809   11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:18.809   11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:18.809   11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:18.809   11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:18.809   11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:18.809   11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:18.809   11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:18.809    11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:18.809    11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:18.809    11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:18.809    11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:18.809    11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:18.809   11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:18.809    "name": "Existed_Raid",
00:12:18.809    "uuid": "dfda7f70-249c-4d48-83bb-e9169a6ae281",
00:12:18.809    "strip_size_kb": 0,
00:12:18.809    "state": "online",
00:12:18.809    "raid_level": "raid1",
00:12:18.809    "superblock": true,
00:12:18.809    "num_base_bdevs": 4,
00:12:18.809    "num_base_bdevs_discovered": 4,
00:12:18.809    "num_base_bdevs_operational": 4,
00:12:18.809    "base_bdevs_list": [
00:12:18.809      {
00:12:18.809        "name": "NewBaseBdev",
00:12:18.809        "uuid": "6478ad6f-8e58-4a0d-b35d-d2e71620dccf",
00:12:18.809        "is_configured": true,
00:12:18.809        "data_offset": 2048,
00:12:18.809        "data_size": 63488
00:12:18.809      },
00:12:18.809      {
00:12:18.809        "name": "BaseBdev2",
00:12:18.809        "uuid": "81a0a11e-3ba1-44e4-bee7-e1089b091d80",
00:12:18.809        "is_configured": true,
00:12:18.809        "data_offset": 2048,
00:12:18.809        "data_size": 63488
00:12:18.809      },
00:12:18.809      {
00:12:18.809        "name": "BaseBdev3",
00:12:18.809        "uuid": "354b6c21-c91e-464f-91cf-463ebf827525",
00:12:18.809        "is_configured": true,
00:12:18.809        "data_offset": 2048,
00:12:18.809        "data_size": 63488
00:12:18.809      },
00:12:18.809      {
00:12:18.809        "name": "BaseBdev4",
00:12:18.809        "uuid": "6dd5d9fa-e1d9-47d4-a8c8-2b1be8ee0fdd",
00:12:18.809        "is_configured": true,
00:12:18.809        "data_offset": 2048,
00:12:18.809        "data_size": 63488
00:12:18.809      }
00:12:18.809    ]
00:12:18.809  }'
00:12:18.809   11:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:18.809   11:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:19.377   11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:12:19.377   11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:12:19.377   11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:12:19.377   11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:12:19.377   11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:12:19.377   11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:12:19.377    11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:12:19.377    11:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:19.377    11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:12:19.377    11:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:19.377  [2024-12-16 11:33:45.158452] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:19.377    11:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:19.377   11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:12:19.377    "name": "Existed_Raid",
00:12:19.377    "aliases": [
00:12:19.377      "dfda7f70-249c-4d48-83bb-e9169a6ae281"
00:12:19.377    ],
00:12:19.377    "product_name": "Raid Volume",
00:12:19.377    "block_size": 512,
00:12:19.377    "num_blocks": 63488,
00:12:19.377    "uuid": "dfda7f70-249c-4d48-83bb-e9169a6ae281",
00:12:19.377    "assigned_rate_limits": {
00:12:19.377      "rw_ios_per_sec": 0,
00:12:19.377      "rw_mbytes_per_sec": 0,
00:12:19.377      "r_mbytes_per_sec": 0,
00:12:19.377      "w_mbytes_per_sec": 0
00:12:19.377    },
00:12:19.377    "claimed": false,
00:12:19.377    "zoned": false,
00:12:19.377    "supported_io_types": {
00:12:19.377      "read": true,
00:12:19.377      "write": true,
00:12:19.377      "unmap": false,
00:12:19.377      "flush": false,
00:12:19.377      "reset": true,
00:12:19.377      "nvme_admin": false,
00:12:19.377      "nvme_io": false,
00:12:19.377      "nvme_io_md": false,
00:12:19.377      "write_zeroes": true,
00:12:19.377      "zcopy": false,
00:12:19.377      "get_zone_info": false,
00:12:19.377      "zone_management": false,
00:12:19.377      "zone_append": false,
00:12:19.377      "compare": false,
00:12:19.377      "compare_and_write": false,
00:12:19.377      "abort": false,
00:12:19.377      "seek_hole": false,
00:12:19.377      "seek_data": false,
00:12:19.377      "copy": false,
00:12:19.377      "nvme_iov_md": false
00:12:19.377    },
00:12:19.377    "memory_domains": [
00:12:19.377      {
00:12:19.377        "dma_device_id": "system",
00:12:19.377        "dma_device_type": 1
00:12:19.377      },
00:12:19.377      {
00:12:19.377        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:19.377        "dma_device_type": 2
00:12:19.377      },
00:12:19.377      {
00:12:19.377        "dma_device_id": "system",
00:12:19.377        "dma_device_type": 1
00:12:19.377      },
00:12:19.377      {
00:12:19.377        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:19.377        "dma_device_type": 2
00:12:19.377      },
00:12:19.377      {
00:12:19.377        "dma_device_id": "system",
00:12:19.377        "dma_device_type": 1
00:12:19.377      },
00:12:19.377      {
00:12:19.377        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:19.377        "dma_device_type": 2
00:12:19.377      },
00:12:19.377      {
00:12:19.377        "dma_device_id": "system",
00:12:19.377        "dma_device_type": 1
00:12:19.377      },
00:12:19.377      {
00:12:19.378        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:19.378        "dma_device_type": 2
00:12:19.378      }
00:12:19.378    ],
00:12:19.378    "driver_specific": {
00:12:19.378      "raid": {
00:12:19.378        "uuid": "dfda7f70-249c-4d48-83bb-e9169a6ae281",
00:12:19.378        "strip_size_kb": 0,
00:12:19.378        "state": "online",
00:12:19.378        "raid_level": "raid1",
00:12:19.378        "superblock": true,
00:12:19.378        "num_base_bdevs": 4,
00:12:19.378        "num_base_bdevs_discovered": 4,
00:12:19.378        "num_base_bdevs_operational": 4,
00:12:19.378        "base_bdevs_list": [
00:12:19.378          {
00:12:19.378            "name": "NewBaseBdev",
00:12:19.378            "uuid": "6478ad6f-8e58-4a0d-b35d-d2e71620dccf",
00:12:19.378            "is_configured": true,
00:12:19.378            "data_offset": 2048,
00:12:19.378            "data_size": 63488
00:12:19.378          },
00:12:19.378          {
00:12:19.378            "name": "BaseBdev2",
00:12:19.378            "uuid": "81a0a11e-3ba1-44e4-bee7-e1089b091d80",
00:12:19.378            "is_configured": true,
00:12:19.378            "data_offset": 2048,
00:12:19.378            "data_size": 63488
00:12:19.378          },
00:12:19.378          {
00:12:19.378            "name": "BaseBdev3",
00:12:19.378            "uuid": "354b6c21-c91e-464f-91cf-463ebf827525",
00:12:19.378            "is_configured": true,
00:12:19.378            "data_offset": 2048,
00:12:19.378            "data_size": 63488
00:12:19.378          },
00:12:19.378          {
00:12:19.378            "name": "BaseBdev4",
00:12:19.378            "uuid": "6dd5d9fa-e1d9-47d4-a8c8-2b1be8ee0fdd",
00:12:19.378            "is_configured": true,
00:12:19.378            "data_offset": 2048,
00:12:19.378            "data_size": 63488
00:12:19.378          }
00:12:19.378        ]
00:12:19.378      }
00:12:19.378    }
00:12:19.378  }'
00:12:19.378    11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:12:19.378   11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:12:19.378  BaseBdev2
00:12:19.378  BaseBdev3
00:12:19.378  BaseBdev4'
00:12:19.378    11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:19.378   11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:12:19.378   11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:19.378    11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:12:19.378    11:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:19.378    11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:19.378    11:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:19.378    11:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:19.378   11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:12:19.378   11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:12:19.378   11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:19.378    11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:12:19.378    11:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:19.378    11:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:19.378    11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:19.378    11:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:19.378   11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:12:19.378   11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:12:19.378   11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:19.378    11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:12:19.378    11:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:19.378    11:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:19.378    11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:19.378    11:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:19.378   11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:12:19.378   11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:12:19.378   11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:19.378    11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:19.378    11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:12:19.378    11:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:19.378    11:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:19.378    11:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:19.636   11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:12:19.636   11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:12:19.636   11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:12:19.636   11:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:19.636   11:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:19.636  [2024-12-16 11:33:45.449573] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:12:19.636  [2024-12-16 11:33:45.449691] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:19.636  [2024-12-16 11:33:45.449812] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:19.636  [2024-12-16 11:33:45.450144] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:19.636  [2024-12-16 11:33:45.450216] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline
00:12:19.636   11:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:19.636   11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84963
00:12:19.636   11:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 84963 ']'
00:12:19.636   11:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 84963
00:12:19.636    11:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname
00:12:19.636   11:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:12:19.636    11:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84963
00:12:19.636  killing process with pid 84963
00:12:19.636   11:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:12:19.636   11:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:12:19.636   11:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84963'
00:12:19.636   11:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 84963
00:12:19.636  [2024-12-16 11:33:45.486905] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:12:19.636   11:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 84963
00:12:19.636  [2024-12-16 11:33:45.530172] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:12:19.895   11:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:12:19.895  
00:12:19.895  real	0m10.014s
00:12:19.895  user	0m17.150s
00:12:19.895  sys	0m2.049s
00:12:19.895   11:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:19.895   11:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:19.895  ************************************
00:12:19.895  END TEST raid_state_function_test_sb
00:12:19.895  ************************************
00:12:19.895   11:33:45 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4
00:12:19.895   11:33:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:12:19.895   11:33:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:19.895   11:33:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:12:19.895  ************************************
00:12:19.895  START TEST raid_superblock_test
00:12:19.895  ************************************
00:12:19.895   11:33:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4
00:12:19.895   11:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:12:19.895   11:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4
00:12:19.895   11:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:12:19.895   11:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:12:19.895   11:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:12:19.895   11:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:12:19.895   11:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:12:19.895   11:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:12:19.895   11:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:12:19.895   11:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:12:19.895   11:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:12:19.895   11:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:12:19.895   11:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:12:19.895   11:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:12:19.895   11:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:12:19.895   11:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=85617
00:12:19.895   11:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:12:19.895   11:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 85617
00:12:19.895   11:33:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 85617 ']'
00:12:19.895   11:33:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:19.895   11:33:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:12:19.895   11:33:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:19.895  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:19.895   11:33:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:12:19.895   11:33:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:19.895  [2024-12-16 11:33:45.941999] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:12:19.895  [2024-12-16 11:33:45.942244] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85617 ]
00:12:20.153  [2024-12-16 11:33:46.109804] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:20.153  [2024-12-16 11:33:46.173806] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:12:20.411  [2024-12-16 11:33:46.227867] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:20.411  [2024-12-16 11:33:46.227914] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:20.979  malloc1
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:20.979  [2024-12-16 11:33:46.858575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:12:20.979  [2024-12-16 11:33:46.858710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:20.979  [2024-12-16 11:33:46.858756] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:12:20.979  [2024-12-16 11:33:46.858800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:20.979  [2024-12-16 11:33:46.861322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:20.979  [2024-12-16 11:33:46.861417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:12:20.979  pt1
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:20.979  malloc2
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:20.979   11:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:20.979  [2024-12-16 11:33:46.902758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:20.979  [2024-12-16 11:33:46.902879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:20.979  [2024-12-16 11:33:46.902935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:12:20.979  [2024-12-16 11:33:46.902976] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:20.979  [2024-12-16 11:33:46.905509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:20.979  [2024-12-16 11:33:46.905614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:20.980  pt2
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:20.980  malloc3
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:20.980  [2024-12-16 11:33:46.936408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:12:20.980  [2024-12-16 11:33:46.936587] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:20.980  [2024-12-16 11:33:46.936631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:12:20.980  [2024-12-16 11:33:46.936673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:20.980  [2024-12-16 11:33:46.939179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:20.980  [2024-12-16 11:33:46.939291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:12:20.980  pt3
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:20.980  malloc4
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:20.980  [2024-12-16 11:33:46.969908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:12:20.980  [2024-12-16 11:33:46.970050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:20.980  [2024-12-16 11:33:46.970093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:12:20.980  [2024-12-16 11:33:46.970109] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:20.980  [2024-12-16 11:33:46.972589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:20.980  [2024-12-16 11:33:46.972634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:12:20.980  pt4
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:20.980  [2024-12-16 11:33:46.982037] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:12:20.980  [2024-12-16 11:33:46.984202] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:20.980  [2024-12-16 11:33:46.984271] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:12:20.980  [2024-12-16 11:33:46.984320] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:12:20.980  [2024-12-16 11:33:46.984517] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:12:20.980  [2024-12-16 11:33:46.984553] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:12:20.980  [2024-12-16 11:33:46.984896] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:12:20.980  [2024-12-16 11:33:46.985156] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:12:20.980  [2024-12-16 11:33:46.985174] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:12:20.980  [2024-12-16 11:33:46.985329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:20.980   11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:20.980    11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:20.980    11:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:20.980    11:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:20.980    11:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:20.980    11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:20.980   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:20.980    "name": "raid_bdev1",
00:12:20.980    "uuid": "402d7850-9e58-47ef-b979-3c6b0f7ac20b",
00:12:20.980    "strip_size_kb": 0,
00:12:20.980    "state": "online",
00:12:20.980    "raid_level": "raid1",
00:12:20.980    "superblock": true,
00:12:20.980    "num_base_bdevs": 4,
00:12:20.980    "num_base_bdevs_discovered": 4,
00:12:20.980    "num_base_bdevs_operational": 4,
00:12:20.980    "base_bdevs_list": [
00:12:20.980      {
00:12:20.980        "name": "pt1",
00:12:20.980        "uuid": "00000000-0000-0000-0000-000000000001",
00:12:20.980        "is_configured": true,
00:12:20.980        "data_offset": 2048,
00:12:20.980        "data_size": 63488
00:12:20.980      },
00:12:20.980      {
00:12:20.980        "name": "pt2",
00:12:20.980        "uuid": "00000000-0000-0000-0000-000000000002",
00:12:20.980        "is_configured": true,
00:12:20.980        "data_offset": 2048,
00:12:20.980        "data_size": 63488
00:12:20.980      },
00:12:20.980      {
00:12:20.980        "name": "pt3",
00:12:20.980        "uuid": "00000000-0000-0000-0000-000000000003",
00:12:20.980        "is_configured": true,
00:12:20.980        "data_offset": 2048,
00:12:20.980        "data_size": 63488
00:12:20.980      },
00:12:20.980      {
00:12:20.980        "name": "pt4",
00:12:20.980        "uuid": "00000000-0000-0000-0000-000000000004",
00:12:20.980        "is_configured": true,
00:12:20.980        "data_offset": 2048,
00:12:20.980        "data_size": 63488
00:12:20.980      }
00:12:20.980    ]
00:12:20.980  }'
00:12:20.980   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:20.980   11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
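A rough standalone equivalent of the state check the trace above performs after creating raid_bdev1 from the four passthru bdevs (a sketch only, assuming SPDK's scripts/rpc.py is available and the target listens on its default RPC socket):

  # Sketch: query all RAID bdevs and pick out raid_bdev1, as the test does above.
  ./scripts/rpc.py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
  # Fields the harness compares: .state == "online", .raid_level == "raid1",
  # .num_base_bdevs_discovered == 4, .num_base_bdevs_operational == 4.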
00:12:21.549   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:12:21.549   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:12:21.549   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:12:21.549   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:12:21.549   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:12:21.549   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:12:21.549    11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:12:21.549    11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:21.549    11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:21.549    11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:21.549  [2024-12-16 11:33:47.439621] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:21.549    11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:21.549   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:12:21.549    "name": "raid_bdev1",
00:12:21.549    "aliases": [
00:12:21.549      "402d7850-9e58-47ef-b979-3c6b0f7ac20b"
00:12:21.549    ],
00:12:21.549    "product_name": "Raid Volume",
00:12:21.549    "block_size": 512,
00:12:21.549    "num_blocks": 63488,
00:12:21.549    "uuid": "402d7850-9e58-47ef-b979-3c6b0f7ac20b",
00:12:21.549    "assigned_rate_limits": {
00:12:21.549      "rw_ios_per_sec": 0,
00:12:21.549      "rw_mbytes_per_sec": 0,
00:12:21.549      "r_mbytes_per_sec": 0,
00:12:21.549      "w_mbytes_per_sec": 0
00:12:21.549    },
00:12:21.549    "claimed": false,
00:12:21.549    "zoned": false,
00:12:21.549    "supported_io_types": {
00:12:21.549      "read": true,
00:12:21.549      "write": true,
00:12:21.549      "unmap": false,
00:12:21.549      "flush": false,
00:12:21.549      "reset": true,
00:12:21.549      "nvme_admin": false,
00:12:21.549      "nvme_io": false,
00:12:21.549      "nvme_io_md": false,
00:12:21.549      "write_zeroes": true,
00:12:21.549      "zcopy": false,
00:12:21.549      "get_zone_info": false,
00:12:21.549      "zone_management": false,
00:12:21.549      "zone_append": false,
00:12:21.549      "compare": false,
00:12:21.549      "compare_and_write": false,
00:12:21.549      "abort": false,
00:12:21.549      "seek_hole": false,
00:12:21.549      "seek_data": false,
00:12:21.549      "copy": false,
00:12:21.549      "nvme_iov_md": false
00:12:21.549    },
00:12:21.549    "memory_domains": [
00:12:21.549      {
00:12:21.549        "dma_device_id": "system",
00:12:21.549        "dma_device_type": 1
00:12:21.549      },
00:12:21.549      {
00:12:21.549        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:21.549        "dma_device_type": 2
00:12:21.549      },
00:12:21.549      {
00:12:21.549        "dma_device_id": "system",
00:12:21.550        "dma_device_type": 1
00:12:21.550      },
00:12:21.550      {
00:12:21.550        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:21.550        "dma_device_type": 2
00:12:21.550      },
00:12:21.550      {
00:12:21.550        "dma_device_id": "system",
00:12:21.550        "dma_device_type": 1
00:12:21.550      },
00:12:21.550      {
00:12:21.550        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:21.550        "dma_device_type": 2
00:12:21.550      },
00:12:21.550      {
00:12:21.550        "dma_device_id": "system",
00:12:21.550        "dma_device_type": 1
00:12:21.550      },
00:12:21.550      {
00:12:21.550        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:21.550        "dma_device_type": 2
00:12:21.550      }
00:12:21.550    ],
00:12:21.550    "driver_specific": {
00:12:21.550      "raid": {
00:12:21.550        "uuid": "402d7850-9e58-47ef-b979-3c6b0f7ac20b",
00:12:21.550        "strip_size_kb": 0,
00:12:21.550        "state": "online",
00:12:21.550        "raid_level": "raid1",
00:12:21.550        "superblock": true,
00:12:21.550        "num_base_bdevs": 4,
00:12:21.550        "num_base_bdevs_discovered": 4,
00:12:21.550        "num_base_bdevs_operational": 4,
00:12:21.550        "base_bdevs_list": [
00:12:21.550          {
00:12:21.550            "name": "pt1",
00:12:21.550            "uuid": "00000000-0000-0000-0000-000000000001",
00:12:21.550            "is_configured": true,
00:12:21.550            "data_offset": 2048,
00:12:21.550            "data_size": 63488
00:12:21.550          },
00:12:21.550          {
00:12:21.550            "name": "pt2",
00:12:21.550            "uuid": "00000000-0000-0000-0000-000000000002",
00:12:21.550            "is_configured": true,
00:12:21.550            "data_offset": 2048,
00:12:21.550            "data_size": 63488
00:12:21.550          },
00:12:21.550          {
00:12:21.550            "name": "pt3",
00:12:21.550            "uuid": "00000000-0000-0000-0000-000000000003",
00:12:21.550            "is_configured": true,
00:12:21.550            "data_offset": 2048,
00:12:21.550            "data_size": 63488
00:12:21.550          },
00:12:21.550          {
00:12:21.550            "name": "pt4",
00:12:21.550            "uuid": "00000000-0000-0000-0000-000000000004",
00:12:21.550            "is_configured": true,
00:12:21.550            "data_offset": 2048,
00:12:21.550            "data_size": 63488
00:12:21.550          }
00:12:21.550        ]
00:12:21.550      }
00:12:21.550    }
00:12:21.550  }'
00:12:21.550    11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:12:21.550   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:12:21.550  pt2
00:12:21.550  pt3
00:12:21.550  pt4'
00:12:21.550    11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:21.550   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:12:21.550   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:21.550    11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:21.550    11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:12:21.550    11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:21.550    11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:21.550    11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:21.809   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:12:21.809   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:12:21.809   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:21.809    11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:21.809    11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:12:21.809    11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:21.809    11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:21.809    11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:21.809   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:12:21.809   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:12:21.809   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:21.809    11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:12:21.809    11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:21.809    11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:21.809    11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:21.809    11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:21.809   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:12:21.809   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:12:21.809   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:21.809    11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:12:21.809    11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:21.809    11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:21.809    11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:21.809    11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:21.809   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:12:21.809   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:12:21.809    11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:21.809    11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:12:21.809    11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:21.809    11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:21.809  [2024-12-16 11:33:47.792553] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:21.809    11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:21.809   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=402d7850-9e58-47ef-b979-3c6b0f7ac20b
00:12:21.809   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 402d7850-9e58-47ef-b979-3c6b0f7ac20b ']'
00:12:21.809   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:12:21.809   11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:21.809   11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:21.809  [2024-12-16 11:33:47.840286] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:21.809  [2024-12-16 11:33:47.840404] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:21.809  [2024-12-16 11:33:47.840548] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:21.809  [2024-12-16 11:33:47.840688] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:21.809  [2024-12-16 11:33:47.840741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:12:21.809   11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:21.809    11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:21.809    11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:12:21.809    11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:21.809    11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:21.809    11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:22.069   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:12:22.069   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:12:22.069   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:22.069   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:12:22.069   11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:22.069   11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:22.069   11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:22.069   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:22.069   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:12:22.069   11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:22.069   11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:22.069   11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:22.069   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:22.069   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:12:22.069   11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:22.069   11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:22.069   11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:22.069   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:22.069   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:12:22.069   11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:22.069   11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:22.069   11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:22.069    11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:12:22.069    11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:12:22.069    11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:22.069    11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:22.069    11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:22.069   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:12:22.069   11:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:12:22.069   11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:12:22.069   11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:12:22.069   11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:12:22.069   11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:22.069    11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:12:22.069   11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:22.069   11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:12:22.069   11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:22.069   11:33:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:22.069  [2024-12-16 11:33:48.000755] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:12:22.069  [2024-12-16 11:33:48.003042] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:12:22.069  [2024-12-16 11:33:48.003158] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:12:22.069  [2024-12-16 11:33:48.003238] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:12:22.069  [2024-12-16 11:33:48.003336] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:12:22.069  [2024-12-16 11:33:48.003434] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:12:22.069  [2024-12-16 11:33:48.003497] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:12:22.069  [2024-12-16 11:33:48.003578] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:12:22.069  [2024-12-16 11:33:48.003633] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:22.069  [2024-12-16 11:33:48.003667] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring
00:12:22.069  request:
00:12:22.069  {
00:12:22.069  "name": "raid_bdev1",
00:12:22.069  "raid_level": "raid1",
00:12:22.069  "base_bdevs": [
00:12:22.069  "malloc1",
00:12:22.069  "malloc2",
00:12:22.069  "malloc3",
00:12:22.069  "malloc4"
00:12:22.069  ],
00:12:22.069  "superblock": false,
00:12:22.069  "method": "bdev_raid_create",
00:12:22.069  "req_id": 1
00:12:22.069  }
00:12:22.069  Got JSON-RPC error response
00:12:22.069  response:
00:12:22.069  {
00:12:22.069  "code": -17,
00:12:22.069  "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:12:22.069  }
00:12:22.069   11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:12:22.069   11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:12:22.069   11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:12:22.069   11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:12:22.069   11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
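The failed bdev_raid_create above is the expected negative case: the malloc bdevs still carry the superblock written for the earlier raid_bdev1, so a second create is rejected with code -17 (File exists). A minimal way to reproduce that expectation outside the harness might look like the following sketch (same rpc.py assumption as above; not part of the captured run):

  # Expect this create to fail because the base bdevs already hold a raid superblock.
  if ./scripts/rpc.py bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
      echo "unexpected success: stale superblocks should block re-creation" >&2
      exit 1
  fi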
00:12:22.069    11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:22.069    11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:12:22.069    11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:22.069    11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:22.069    11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:22.069   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:12:22.069   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:12:22.069   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:12:22.069   11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:22.069   11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:22.069  [2024-12-16 11:33:48.068899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:12:22.069  [2024-12-16 11:33:48.068979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:22.069  [2024-12-16 11:33:48.069005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:12:22.069  [2024-12-16 11:33:48.069015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:22.069  [2024-12-16 11:33:48.071489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:22.069  [2024-12-16 11:33:48.071616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:12:22.069  [2024-12-16 11:33:48.071727] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:12:22.069  [2024-12-16 11:33:48.071780] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:12:22.069  pt1
00:12:22.069   11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:22.069   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:12:22.069   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:22.069   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:22.069   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:22.069   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:22.069   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:22.069   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:22.069   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:22.069   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:22.069   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:22.069    11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:22.069    11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:22.069    11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:22.069    11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:22.069    11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:22.069   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:22.069    "name": "raid_bdev1",
00:12:22.069    "uuid": "402d7850-9e58-47ef-b979-3c6b0f7ac20b",
00:12:22.069    "strip_size_kb": 0,
00:12:22.069    "state": "configuring",
00:12:22.069    "raid_level": "raid1",
00:12:22.069    "superblock": true,
00:12:22.069    "num_base_bdevs": 4,
00:12:22.069    "num_base_bdevs_discovered": 1,
00:12:22.069    "num_base_bdevs_operational": 4,
00:12:22.069    "base_bdevs_list": [
00:12:22.069      {
00:12:22.069        "name": "pt1",
00:12:22.069        "uuid": "00000000-0000-0000-0000-000000000001",
00:12:22.069        "is_configured": true,
00:12:22.069        "data_offset": 2048,
00:12:22.069        "data_size": 63488
00:12:22.069      },
00:12:22.069      {
00:12:22.069        "name": null,
00:12:22.069        "uuid": "00000000-0000-0000-0000-000000000002",
00:12:22.069        "is_configured": false,
00:12:22.069        "data_offset": 2048,
00:12:22.069        "data_size": 63488
00:12:22.069      },
00:12:22.069      {
00:12:22.069        "name": null,
00:12:22.069        "uuid": "00000000-0000-0000-0000-000000000003",
00:12:22.069        "is_configured": false,
00:12:22.069        "data_offset": 2048,
00:12:22.069        "data_size": 63488
00:12:22.069      },
00:12:22.069      {
00:12:22.069        "name": null,
00:12:22.069        "uuid": "00000000-0000-0000-0000-000000000004",
00:12:22.069        "is_configured": false,
00:12:22.069        "data_offset": 2048,
00:12:22.069        "data_size": 63488
00:12:22.070      }
00:12:22.070    ]
00:12:22.070  }'
00:12:22.070   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:22.070   11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:22.636   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:12:22.636   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:22.636   11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:22.636   11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:22.636  [2024-12-16 11:33:48.538131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:22.636  [2024-12-16 11:33:48.538256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:22.636  [2024-12-16 11:33:48.538307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:12:22.636  [2024-12-16 11:33:48.538341] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:22.636  [2024-12-16 11:33:48.538831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:22.636  [2024-12-16 11:33:48.538896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:22.636  [2024-12-16 11:33:48.539019] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:12:22.636  [2024-12-16 11:33:48.539083] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:22.636  pt2
00:12:22.636   11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:22.636   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:12:22.636   11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:22.636   11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:22.636  [2024-12-16 11:33:48.546142] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:12:22.636   11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:22.636   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:12:22.636   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:22.636   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:22.636   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:22.636   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:22.636   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:22.636   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:22.636   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:22.636   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:22.636   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:22.636    11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:22.636    11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:22.636    11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:22.636    11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:22.636    11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:22.636   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:22.636    "name": "raid_bdev1",
00:12:22.637    "uuid": "402d7850-9e58-47ef-b979-3c6b0f7ac20b",
00:12:22.637    "strip_size_kb": 0,
00:12:22.637    "state": "configuring",
00:12:22.637    "raid_level": "raid1",
00:12:22.637    "superblock": true,
00:12:22.637    "num_base_bdevs": 4,
00:12:22.637    "num_base_bdevs_discovered": 1,
00:12:22.637    "num_base_bdevs_operational": 4,
00:12:22.637    "base_bdevs_list": [
00:12:22.637      {
00:12:22.637        "name": "pt1",
00:12:22.637        "uuid": "00000000-0000-0000-0000-000000000001",
00:12:22.637        "is_configured": true,
00:12:22.637        "data_offset": 2048,
00:12:22.637        "data_size": 63488
00:12:22.637      },
00:12:22.637      {
00:12:22.637        "name": null,
00:12:22.637        "uuid": "00000000-0000-0000-0000-000000000002",
00:12:22.637        "is_configured": false,
00:12:22.637        "data_offset": 0,
00:12:22.637        "data_size": 63488
00:12:22.637      },
00:12:22.637      {
00:12:22.637        "name": null,
00:12:22.637        "uuid": "00000000-0000-0000-0000-000000000003",
00:12:22.637        "is_configured": false,
00:12:22.637        "data_offset": 2048,
00:12:22.637        "data_size": 63488
00:12:22.637      },
00:12:22.637      {
00:12:22.637        "name": null,
00:12:22.637        "uuid": "00000000-0000-0000-0000-000000000004",
00:12:22.637        "is_configured": false,
00:12:22.637        "data_offset": 2048,
00:12:22.637        "data_size": 63488
00:12:22.637      }
00:12:22.637    ]
00:12:22.637  }'
00:12:22.637   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:22.637   11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:22.895   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:12:22.895   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:22.895   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:22.895   11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:22.895   11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:22.895  [2024-12-16 11:33:48.959262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:22.895  [2024-12-16 11:33:48.959335] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:22.895  [2024-12-16 11:33:48.959355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:12:22.895  [2024-12-16 11:33:48.959366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:22.895  [2024-12-16 11:33:48.959822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:23.154  [2024-12-16 11:33:48.959883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:23.154  [2024-12-16 11:33:48.959967] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:12:23.154  [2024-12-16 11:33:48.959993] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:23.154  pt2
00:12:23.154   11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:23.154   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:12:23.154   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:23.154   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:12:23.154   11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:23.154   11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:23.154  [2024-12-16 11:33:48.967197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:12:23.154  [2024-12-16 11:33:48.967303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:23.154  [2024-12-16 11:33:48.967324] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:12:23.154  [2024-12-16 11:33:48.967335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:23.154  [2024-12-16 11:33:48.967681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:23.154  [2024-12-16 11:33:48.967701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:12:23.154  [2024-12-16 11:33:48.967759] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:12:23.154  [2024-12-16 11:33:48.967778] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:12:23.154  pt3
00:12:23.154   11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:23.154   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:12:23.155   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:23.155   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:12:23.155   11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:23.155   11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:23.155  [2024-12-16 11:33:48.975233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:12:23.155  [2024-12-16 11:33:48.975280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:23.155  [2024-12-16 11:33:48.975294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:12:23.155  [2024-12-16 11:33:48.975303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:23.155  [2024-12-16 11:33:48.975615] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:23.155  [2024-12-16 11:33:48.975635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:12:23.155  [2024-12-16 11:33:48.975687] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:12:23.155  [2024-12-16 11:33:48.975705] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:12:23.155  [2024-12-16 11:33:48.975802] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:12:23.155  [2024-12-16 11:33:48.975813] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:12:23.155  [2024-12-16 11:33:48.976049] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:12:23.155  [2024-12-16 11:33:48.976167] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:12:23.155  [2024-12-16 11:33:48.976177] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:12:23.155  [2024-12-16 11:33:48.976277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:23.155  pt4
00:12:23.155   11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:23.155   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:12:23.155   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:23.155   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:12:23.155   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:23.155   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:23.155   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:23.155   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:23.155   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:23.155   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:23.155   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:23.155   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:23.155   11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:23.155    11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:23.155    11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:23.155    11:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:23.155    11:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:23.155    11:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:23.155   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:23.155    "name": "raid_bdev1",
00:12:23.155    "uuid": "402d7850-9e58-47ef-b979-3c6b0f7ac20b",
00:12:23.155    "strip_size_kb": 0,
00:12:23.155    "state": "online",
00:12:23.155    "raid_level": "raid1",
00:12:23.155    "superblock": true,
00:12:23.155    "num_base_bdevs": 4,
00:12:23.155    "num_base_bdevs_discovered": 4,
00:12:23.155    "num_base_bdevs_operational": 4,
00:12:23.155    "base_bdevs_list": [
00:12:23.155      {
00:12:23.155        "name": "pt1",
00:12:23.155        "uuid": "00000000-0000-0000-0000-000000000001",
00:12:23.155        "is_configured": true,
00:12:23.155        "data_offset": 2048,
00:12:23.155        "data_size": 63488
00:12:23.155      },
00:12:23.155      {
00:12:23.155        "name": "pt2",
00:12:23.155        "uuid": "00000000-0000-0000-0000-000000000002",
00:12:23.155        "is_configured": true,
00:12:23.155        "data_offset": 2048,
00:12:23.155        "data_size": 63488
00:12:23.155      },
00:12:23.155      {
00:12:23.155        "name": "pt3",
00:12:23.155        "uuid": "00000000-0000-0000-0000-000000000003",
00:12:23.155        "is_configured": true,
00:12:23.155        "data_offset": 2048,
00:12:23.155        "data_size": 63488
00:12:23.155      },
00:12:23.155      {
00:12:23.155        "name": "pt4",
00:12:23.155        "uuid": "00000000-0000-0000-0000-000000000004",
00:12:23.155        "is_configured": true,
00:12:23.155        "data_offset": 2048,
00:12:23.155        "data_size": 63488
00:12:23.155      }
00:12:23.155    ]
00:12:23.155  }'
00:12:23.155   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:23.155   11:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:23.414   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:12:23.414   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:12:23.414   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:12:23.414   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:12:23.415   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:12:23.415   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:12:23.415    11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:12:23.415    11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:23.415    11:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:23.415    11:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:23.415  [2024-12-16 11:33:49.404755] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:23.415    11:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:23.415   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:12:23.415    "name": "raid_bdev1",
00:12:23.415    "aliases": [
00:12:23.415      "402d7850-9e58-47ef-b979-3c6b0f7ac20b"
00:12:23.415    ],
00:12:23.415    "product_name": "Raid Volume",
00:12:23.415    "block_size": 512,
00:12:23.415    "num_blocks": 63488,
00:12:23.415    "uuid": "402d7850-9e58-47ef-b979-3c6b0f7ac20b",
00:12:23.415    "assigned_rate_limits": {
00:12:23.415      "rw_ios_per_sec": 0,
00:12:23.415      "rw_mbytes_per_sec": 0,
00:12:23.415      "r_mbytes_per_sec": 0,
00:12:23.415      "w_mbytes_per_sec": 0
00:12:23.415    },
00:12:23.415    "claimed": false,
00:12:23.415    "zoned": false,
00:12:23.415    "supported_io_types": {
00:12:23.415      "read": true,
00:12:23.415      "write": true,
00:12:23.415      "unmap": false,
00:12:23.415      "flush": false,
00:12:23.415      "reset": true,
00:12:23.415      "nvme_admin": false,
00:12:23.415      "nvme_io": false,
00:12:23.415      "nvme_io_md": false,
00:12:23.415      "write_zeroes": true,
00:12:23.415      "zcopy": false,
00:12:23.415      "get_zone_info": false,
00:12:23.415      "zone_management": false,
00:12:23.415      "zone_append": false,
00:12:23.415      "compare": false,
00:12:23.415      "compare_and_write": false,
00:12:23.415      "abort": false,
00:12:23.415      "seek_hole": false,
00:12:23.415      "seek_data": false,
00:12:23.415      "copy": false,
00:12:23.415      "nvme_iov_md": false
00:12:23.415    },
00:12:23.415    "memory_domains": [
00:12:23.415      {
00:12:23.415        "dma_device_id": "system",
00:12:23.415        "dma_device_type": 1
00:12:23.415      },
00:12:23.415      {
00:12:23.415        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:23.415        "dma_device_type": 2
00:12:23.415      },
00:12:23.415      {
00:12:23.415        "dma_device_id": "system",
00:12:23.415        "dma_device_type": 1
00:12:23.415      },
00:12:23.415      {
00:12:23.415        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:23.415        "dma_device_type": 2
00:12:23.415      },
00:12:23.415      {
00:12:23.415        "dma_device_id": "system",
00:12:23.415        "dma_device_type": 1
00:12:23.415      },
00:12:23.415      {
00:12:23.415        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:23.415        "dma_device_type": 2
00:12:23.415      },
00:12:23.415      {
00:12:23.415        "dma_device_id": "system",
00:12:23.415        "dma_device_type": 1
00:12:23.415      },
00:12:23.415      {
00:12:23.415        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:23.415        "dma_device_type": 2
00:12:23.415      }
00:12:23.415    ],
00:12:23.415    "driver_specific": {
00:12:23.415      "raid": {
00:12:23.415        "uuid": "402d7850-9e58-47ef-b979-3c6b0f7ac20b",
00:12:23.415        "strip_size_kb": 0,
00:12:23.415        "state": "online",
00:12:23.415        "raid_level": "raid1",
00:12:23.415        "superblock": true,
00:12:23.415        "num_base_bdevs": 4,
00:12:23.415        "num_base_bdevs_discovered": 4,
00:12:23.415        "num_base_bdevs_operational": 4,
00:12:23.415        "base_bdevs_list": [
00:12:23.415          {
00:12:23.415            "name": "pt1",
00:12:23.415            "uuid": "00000000-0000-0000-0000-000000000001",
00:12:23.415            "is_configured": true,
00:12:23.415            "data_offset": 2048,
00:12:23.415            "data_size": 63488
00:12:23.415          },
00:12:23.415          {
00:12:23.415            "name": "pt2",
00:12:23.415            "uuid": "00000000-0000-0000-0000-000000000002",
00:12:23.415            "is_configured": true,
00:12:23.415            "data_offset": 2048,
00:12:23.415            "data_size": 63488
00:12:23.415          },
00:12:23.415          {
00:12:23.415            "name": "pt3",
00:12:23.415            "uuid": "00000000-0000-0000-0000-000000000003",
00:12:23.415            "is_configured": true,
00:12:23.415            "data_offset": 2048,
00:12:23.415            "data_size": 63488
00:12:23.415          },
00:12:23.415          {
00:12:23.415            "name": "pt4",
00:12:23.415            "uuid": "00000000-0000-0000-0000-000000000004",
00:12:23.415            "is_configured": true,
00:12:23.415            "data_offset": 2048,
00:12:23.415            "data_size": 63488
00:12:23.415          }
00:12:23.415        ]
00:12:23.415      }
00:12:23.415    }
00:12:23.415  }'
00:12:23.415    11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:12:23.674   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:12:23.674  pt2
00:12:23.674  pt3
00:12:23.674  pt4'
00:12:23.674    11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:23.674   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:12:23.674   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:23.674    11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:12:23.674    11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:23.674    11:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:23.674    11:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:23.674    11:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:23.674   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:12:23.674   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:12:23.674   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:23.674    11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:12:23.675    11:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:23.675    11:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:23.675    11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:23.675    11:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:23.675   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:12:23.675   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:12:23.675   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:23.675    11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:23.675    11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:12:23.675    11:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:23.675    11:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:23.675    11:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:23.675   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:12:23.675   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:12:23.675   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:23.675    11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:12:23.675    11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:23.675    11:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:23.675    11:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:23.675    11:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:23.675   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:12:23.675   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:12:23.675    11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:23.675    11:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:23.675    11:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:23.675    11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:12:23.675  [2024-12-16 11:33:49.725646] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:23.675    11:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:23.934   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 402d7850-9e58-47ef-b979-3c6b0f7ac20b '!=' 402d7850-9e58-47ef-b979-3c6b0f7ac20b ']'
00:12:23.934   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:12:23.934   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:12:23.934   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:12:23.934   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:12:23.934   11:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:23.934   11:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:23.934  [2024-12-16 11:33:49.773442] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:12:23.934   11:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:23.934   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:12:23.934   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:23.934   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:23.934   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:23.934   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:23.934   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:23.934   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:23.934   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:23.934   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:23.934   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:23.934    11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:23.934    11:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:23.934    11:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:23.934    11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:23.934    11:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:23.934   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:23.934    "name": "raid_bdev1",
00:12:23.934    "uuid": "402d7850-9e58-47ef-b979-3c6b0f7ac20b",
00:12:23.934    "strip_size_kb": 0,
00:12:23.934    "state": "online",
00:12:23.934    "raid_level": "raid1",
00:12:23.934    "superblock": true,
00:12:23.934    "num_base_bdevs": 4,
00:12:23.934    "num_base_bdevs_discovered": 3,
00:12:23.934    "num_base_bdevs_operational": 3,
00:12:23.934    "base_bdevs_list": [
00:12:23.934      {
00:12:23.934        "name": null,
00:12:23.934        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:23.934        "is_configured": false,
00:12:23.934        "data_offset": 0,
00:12:23.934        "data_size": 63488
00:12:23.934      },
00:12:23.934      {
00:12:23.934        "name": "pt2",
00:12:23.934        "uuid": "00000000-0000-0000-0000-000000000002",
00:12:23.934        "is_configured": true,
00:12:23.934        "data_offset": 2048,
00:12:23.934        "data_size": 63488
00:12:23.934      },
00:12:23.934      {
00:12:23.934        "name": "pt3",
00:12:23.934        "uuid": "00000000-0000-0000-0000-000000000003",
00:12:23.934        "is_configured": true,
00:12:23.934        "data_offset": 2048,
00:12:23.934        "data_size": 63488
00:12:23.934      },
00:12:23.934      {
00:12:23.934        "name": "pt4",
00:12:23.934        "uuid": "00000000-0000-0000-0000-000000000004",
00:12:23.934        "is_configured": true,
00:12:23.934        "data_offset": 2048,
00:12:23.934        "data_size": 63488
00:12:23.934      }
00:12:23.934    ]
00:12:23.934  }'
00:12:23.934   11:33:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:23.934   11:33:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
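(The verify_raid_bdev_state step above fetches the raid bdev JSON with bdev_raid_get_bdevs and checks individual fields with jq. A minimal standalone sketch of the same check follows; the /var/tmp/spdk.sock target and the scripts/rpc.py path are assumptions for illustration, not taken from this log:

    # Query all raid bdevs, pick out raid_bdev1, then compare the fields the test cares about
    info=$(./scripts/rpc.py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    state=$(jq -r '.state' <<< "$info")
    level=$(jq -r '.raid_level' <<< "$info")
    discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")
    [[ $state == online && $level == raid1 && $discovered == 3 ]] || echo "unexpected raid_bdev1 state"
)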
00:12:24.192   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:12:24.192   11:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:24.192   11:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:24.192  [2024-12-16 11:33:50.210514] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:24.192  [2024-12-16 11:33:50.210618] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:24.193  [2024-12-16 11:33:50.210735] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:24.193  [2024-12-16 11:33:50.210835] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:24.193  [2024-12-16 11:33:50.210881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:12:24.193   11:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:24.193    11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:24.193    11:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:24.193    11:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:24.193    11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:12:24.193    11:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:24.451   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:12:24.451   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:12:24.451   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:12:24.451   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:12:24.451   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:12:24.451   11:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:24.451   11:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:24.451   11:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:24.451   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:12:24.451   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:12:24.451   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:12:24.451   11:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:24.451   11:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:24.451   11:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:24.451   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:12:24.451   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:12:24.451   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4
00:12:24.451   11:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:24.451   11:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:24.451   11:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:24.451   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:12:24.451   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:12:24.451   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:12:24.451   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:12:24.451   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:24.451   11:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:24.451   11:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:24.451  [2024-12-16 11:33:50.306757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:24.452  [2024-12-16 11:33:50.306829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:24.452  [2024-12-16 11:33:50.306849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:12:24.452  [2024-12-16 11:33:50.306861] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:24.452  [2024-12-16 11:33:50.309166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:24.452  [2024-12-16 11:33:50.309245] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:24.452  [2024-12-16 11:33:50.309347] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:12:24.452  [2024-12-16 11:33:50.309410] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:24.452  pt2
00:12:24.452   11:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
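(In the block above the test tears down raid_bdev1 and the pt2..pt4 passthru bdevs, then recreates pt2; because the underlying malloc bdevs still carry the raid superblock, the examine path re-claims the bdev and raid_bdev1 starts re-assembling in the configuring state. A rough manual equivalent, assuming the same bdev names and a target on the default RPC socket (assumptions, not from this log):

    ./scripts/rpc.py bdev_raid_delete raid_bdev1
    for n in pt2 pt3 pt4; do ./scripts/rpc.py bdev_passthru_delete "$n"; done
    # Recreating a passthru bdev re-exposes the on-disk superblock, so the raid
    # module re-claims it and rebuilds raid_bdev1 as base bdevs come back.
    ./scripts/rpc.py bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
)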
00:12:24.452   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:12:24.452   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:24.452   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:24.452   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:24.452   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:24.452   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:24.452   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:24.452   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:24.452   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:24.452   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:24.452    11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:24.452    11:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:24.452    11:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:24.452    11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:24.452    11:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:24.452   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:24.452    "name": "raid_bdev1",
00:12:24.452    "uuid": "402d7850-9e58-47ef-b979-3c6b0f7ac20b",
00:12:24.452    "strip_size_kb": 0,
00:12:24.452    "state": "configuring",
00:12:24.452    "raid_level": "raid1",
00:12:24.452    "superblock": true,
00:12:24.452    "num_base_bdevs": 4,
00:12:24.452    "num_base_bdevs_discovered": 1,
00:12:24.452    "num_base_bdevs_operational": 3,
00:12:24.452    "base_bdevs_list": [
00:12:24.452      {
00:12:24.452        "name": null,
00:12:24.452        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:24.452        "is_configured": false,
00:12:24.452        "data_offset": 2048,
00:12:24.452        "data_size": 63488
00:12:24.452      },
00:12:24.452      {
00:12:24.452        "name": "pt2",
00:12:24.452        "uuid": "00000000-0000-0000-0000-000000000002",
00:12:24.452        "is_configured": true,
00:12:24.452        "data_offset": 2048,
00:12:24.452        "data_size": 63488
00:12:24.452      },
00:12:24.452      {
00:12:24.452        "name": null,
00:12:24.452        "uuid": "00000000-0000-0000-0000-000000000003",
00:12:24.452        "is_configured": false,
00:12:24.452        "data_offset": 2048,
00:12:24.452        "data_size": 63488
00:12:24.452      },
00:12:24.452      {
00:12:24.452        "name": null,
00:12:24.452        "uuid": "00000000-0000-0000-0000-000000000004",
00:12:24.452        "is_configured": false,
00:12:24.452        "data_offset": 2048,
00:12:24.452        "data_size": 63488
00:12:24.452      }
00:12:24.452    ]
00:12:24.452  }'
00:12:24.452   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:24.452   11:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:24.710   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:12:24.710   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:12:24.710   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:12:24.710   11:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:24.710   11:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:24.710  [2024-12-16 11:33:50.695841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:12:24.710  [2024-12-16 11:33:50.695965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:24.710  [2024-12-16 11:33:50.696013] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
00:12:24.710  [2024-12-16 11:33:50.696050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:24.710  [2024-12-16 11:33:50.696542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:24.710  [2024-12-16 11:33:50.696614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:12:24.710  [2024-12-16 11:33:50.696716] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:12:24.710  [2024-12-16 11:33:50.696769] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:12:24.710  pt3
00:12:24.710   11:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:24.710   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:12:24.710   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:24.710   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:24.710   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:24.710   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:24.711   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:24.711   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:24.711   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:24.711   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:24.711   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:24.711    11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:24.711    11:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:24.711    11:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:24.711    11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:24.711    11:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:24.711   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:24.711    "name": "raid_bdev1",
00:12:24.711    "uuid": "402d7850-9e58-47ef-b979-3c6b0f7ac20b",
00:12:24.711    "strip_size_kb": 0,
00:12:24.711    "state": "configuring",
00:12:24.711    "raid_level": "raid1",
00:12:24.711    "superblock": true,
00:12:24.711    "num_base_bdevs": 4,
00:12:24.711    "num_base_bdevs_discovered": 2,
00:12:24.711    "num_base_bdevs_operational": 3,
00:12:24.711    "base_bdevs_list": [
00:12:24.711      {
00:12:24.711        "name": null,
00:12:24.711        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:24.711        "is_configured": false,
00:12:24.711        "data_offset": 2048,
00:12:24.711        "data_size": 63488
00:12:24.711      },
00:12:24.711      {
00:12:24.711        "name": "pt2",
00:12:24.711        "uuid": "00000000-0000-0000-0000-000000000002",
00:12:24.711        "is_configured": true,
00:12:24.711        "data_offset": 2048,
00:12:24.711        "data_size": 63488
00:12:24.711      },
00:12:24.711      {
00:12:24.711        "name": "pt3",
00:12:24.711        "uuid": "00000000-0000-0000-0000-000000000003",
00:12:24.711        "is_configured": true,
00:12:24.711        "data_offset": 2048,
00:12:24.711        "data_size": 63488
00:12:24.711      },
00:12:24.711      {
00:12:24.711        "name": null,
00:12:24.711        "uuid": "00000000-0000-0000-0000-000000000004",
00:12:24.711        "is_configured": false,
00:12:24.711        "data_offset": 2048,
00:12:24.711        "data_size": 63488
00:12:24.711      }
00:12:24.711    ]
00:12:24.711  }'
00:12:24.711   11:33:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:24.711   11:33:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:25.278   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:12:25.278   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:12:25.278   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3
00:12:25.278   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:12:25.278   11:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:25.278   11:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:25.278  [2024-12-16 11:33:51.149011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:12:25.278  [2024-12-16 11:33:51.149092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:25.278  [2024-12-16 11:33:51.149115] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
00:12:25.278  [2024-12-16 11:33:51.149127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:25.278  [2024-12-16 11:33:51.149550] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:25.278  [2024-12-16 11:33:51.149571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:12:25.278  [2024-12-16 11:33:51.149660] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:12:25.278  [2024-12-16 11:33:51.149693] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:12:25.278  [2024-12-16 11:33:51.149800] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:12:25.278  [2024-12-16 11:33:51.149817] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:12:25.278  [2024-12-16 11:33:51.150082] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:12:25.278  [2024-12-16 11:33:51.150215] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:12:25.278  [2024-12-16 11:33:51.150225] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00
00:12:25.278  [2024-12-16 11:33:51.150342] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:25.278  pt4
00:12:25.278   11:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:25.278   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:12:25.278   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:25.278   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:25.278   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:25.279   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:25.279   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:25.279   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:25.279   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:25.279   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:25.279   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:25.279    11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:25.279    11:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:25.279    11:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:25.279    11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:25.279    11:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:25.279   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:25.279    "name": "raid_bdev1",
00:12:25.279    "uuid": "402d7850-9e58-47ef-b979-3c6b0f7ac20b",
00:12:25.279    "strip_size_kb": 0,
00:12:25.279    "state": "online",
00:12:25.279    "raid_level": "raid1",
00:12:25.279    "superblock": true,
00:12:25.279    "num_base_bdevs": 4,
00:12:25.279    "num_base_bdevs_discovered": 3,
00:12:25.279    "num_base_bdevs_operational": 3,
00:12:25.279    "base_bdevs_list": [
00:12:25.279      {
00:12:25.279        "name": null,
00:12:25.279        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:25.279        "is_configured": false,
00:12:25.279        "data_offset": 2048,
00:12:25.279        "data_size": 63488
00:12:25.279      },
00:12:25.279      {
00:12:25.279        "name": "pt2",
00:12:25.279        "uuid": "00000000-0000-0000-0000-000000000002",
00:12:25.279        "is_configured": true,
00:12:25.279        "data_offset": 2048,
00:12:25.279        "data_size": 63488
00:12:25.279      },
00:12:25.279      {
00:12:25.279        "name": "pt3",
00:12:25.279        "uuid": "00000000-0000-0000-0000-000000000003",
00:12:25.279        "is_configured": true,
00:12:25.279        "data_offset": 2048,
00:12:25.279        "data_size": 63488
00:12:25.279      },
00:12:25.279      {
00:12:25.279        "name": "pt4",
00:12:25.279        "uuid": "00000000-0000-0000-0000-000000000004",
00:12:25.279        "is_configured": true,
00:12:25.279        "data_offset": 2048,
00:12:25.279        "data_size": 63488
00:12:25.279      }
00:12:25.279    ]
00:12:25.279  }'
00:12:25.279   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:25.279   11:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:25.538   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:12:25.538   11:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:25.538   11:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:25.803  [2024-12-16 11:33:51.606268] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:25.803  [2024-12-16 11:33:51.606368] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:25.803  [2024-12-16 11:33:51.606504] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:25.803  [2024-12-16 11:33:51.606637] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:25.803  [2024-12-16 11:33:51.606693] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline
00:12:25.803   11:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:25.803    11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:25.803    11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:12:25.803    11:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:25.803    11:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:25.803    11:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:25.803   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:12:25.803   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:12:25.803   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']'
00:12:25.803   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3
00:12:25.803   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4
00:12:25.803   11:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:25.803   11:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:25.803   11:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:25.803   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:12:25.803   11:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:25.803   11:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:25.803  [2024-12-16 11:33:51.674472] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:12:25.803  [2024-12-16 11:33:51.674562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:25.803  [2024-12-16 11:33:51.674592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080
00:12:25.803  [2024-12-16 11:33:51.674603] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:25.803  [2024-12-16 11:33:51.677169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:25.803  [2024-12-16 11:33:51.677258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:12:25.803  [2024-12-16 11:33:51.677351] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:12:25.803  [2024-12-16 11:33:51.677399] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:12:25.803  [2024-12-16 11:33:51.677533] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:12:25.803  [2024-12-16 11:33:51.677561] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:25.803  [2024-12-16 11:33:51.677580] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring
00:12:25.803  [2024-12-16 11:33:51.677622] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:25.803  [2024-12-16 11:33:51.677739] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:12:25.803  pt1
00:12:25.803   11:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:25.803   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']'
00:12:25.803   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:12:25.803   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:25.803   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:25.803   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:25.803   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:25.803   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:25.803   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:25.803   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:25.803   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:25.803   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:25.803    11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:25.803    11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:25.803    11:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:25.803    11:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:25.803    11:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:25.803   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:25.803    "name": "raid_bdev1",
00:12:25.804    "uuid": "402d7850-9e58-47ef-b979-3c6b0f7ac20b",
00:12:25.804    "strip_size_kb": 0,
00:12:25.804    "state": "configuring",
00:12:25.804    "raid_level": "raid1",
00:12:25.804    "superblock": true,
00:12:25.804    "num_base_bdevs": 4,
00:12:25.804    "num_base_bdevs_discovered": 2,
00:12:25.804    "num_base_bdevs_operational": 3,
00:12:25.804    "base_bdevs_list": [
00:12:25.804      {
00:12:25.804        "name": null,
00:12:25.804        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:25.804        "is_configured": false,
00:12:25.804        "data_offset": 2048,
00:12:25.804        "data_size": 63488
00:12:25.804      },
00:12:25.804      {
00:12:25.804        "name": "pt2",
00:12:25.804        "uuid": "00000000-0000-0000-0000-000000000002",
00:12:25.804        "is_configured": true,
00:12:25.804        "data_offset": 2048,
00:12:25.804        "data_size": 63488
00:12:25.804      },
00:12:25.804      {
00:12:25.804        "name": "pt3",
00:12:25.804        "uuid": "00000000-0000-0000-0000-000000000003",
00:12:25.804        "is_configured": true,
00:12:25.804        "data_offset": 2048,
00:12:25.804        "data_size": 63488
00:12:25.804      },
00:12:25.804      {
00:12:25.804        "name": null,
00:12:25.804        "uuid": "00000000-0000-0000-0000-000000000004",
00:12:25.804        "is_configured": false,
00:12:25.804        "data_offset": 2048,
00:12:25.804        "data_size": 63488
00:12:25.804      }
00:12:25.804    ]
00:12:25.804  }'
00:12:25.804   11:33:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:25.804   11:33:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:26.075    11:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring
00:12:26.075    11:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:12:26.075    11:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:26.075    11:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:26.075    11:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:26.075   11:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]]
00:12:26.075   11:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:12:26.075   11:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:26.075   11:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:26.333  [2024-12-16 11:33:52.143683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:12:26.333  [2024-12-16 11:33:52.143794] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:26.333  [2024-12-16 11:33:52.143818] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680
00:12:26.333  [2024-12-16 11:33:52.143830] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:26.333  [2024-12-16 11:33:52.144246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:26.333  [2024-12-16 11:33:52.144274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:12:26.334  [2024-12-16 11:33:52.144346] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:12:26.334  [2024-12-16 11:33:52.144371] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:12:26.334  [2024-12-16 11:33:52.144470] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400
00:12:26.334  [2024-12-16 11:33:52.144489] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:12:26.334  [2024-12-16 11:33:52.144726] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:12:26.334  [2024-12-16 11:33:52.144841] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400
00:12:26.334  [2024-12-16 11:33:52.144855] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400
00:12:26.334  [2024-12-16 11:33:52.144961] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:26.334  pt4
00:12:26.334   11:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:26.334   11:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:12:26.334   11:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:26.334   11:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:26.334   11:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:26.334   11:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:26.334   11:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:26.334   11:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:26.334   11:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:26.334   11:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:26.334   11:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:26.334    11:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:26.334    11:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:26.334    11:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:26.334    11:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:26.334    11:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:26.334   11:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:26.334    "name": "raid_bdev1",
00:12:26.334    "uuid": "402d7850-9e58-47ef-b979-3c6b0f7ac20b",
00:12:26.334    "strip_size_kb": 0,
00:12:26.334    "state": "online",
00:12:26.334    "raid_level": "raid1",
00:12:26.334    "superblock": true,
00:12:26.334    "num_base_bdevs": 4,
00:12:26.334    "num_base_bdevs_discovered": 3,
00:12:26.334    "num_base_bdevs_operational": 3,
00:12:26.334    "base_bdevs_list": [
00:12:26.334      {
00:12:26.334        "name": null,
00:12:26.334        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:26.334        "is_configured": false,
00:12:26.334        "data_offset": 2048,
00:12:26.334        "data_size": 63488
00:12:26.334      },
00:12:26.334      {
00:12:26.334        "name": "pt2",
00:12:26.334        "uuid": "00000000-0000-0000-0000-000000000002",
00:12:26.334        "is_configured": true,
00:12:26.334        "data_offset": 2048,
00:12:26.334        "data_size": 63488
00:12:26.334      },
00:12:26.334      {
00:12:26.334        "name": "pt3",
00:12:26.334        "uuid": "00000000-0000-0000-0000-000000000003",
00:12:26.334        "is_configured": true,
00:12:26.334        "data_offset": 2048,
00:12:26.334        "data_size": 63488
00:12:26.334      },
00:12:26.334      {
00:12:26.334        "name": "pt4",
00:12:26.334        "uuid": "00000000-0000-0000-0000-000000000004",
00:12:26.334        "is_configured": true,
00:12:26.334        "data_offset": 2048,
00:12:26.334        "data_size": 63488
00:12:26.334      }
00:12:26.334    ]
00:12:26.334  }'
00:12:26.334   11:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:26.334   11:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:26.592    11:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:12:26.592    11:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online
00:12:26.592    11:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:26.592    11:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:26.592    11:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:26.592   11:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:12:26.592    11:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:26.592    11:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:26.592    11:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:26.592    11:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
00:12:26.592  [2024-12-16 11:33:52.573166] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:26.592    11:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:26.592   11:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 402d7850-9e58-47ef-b979-3c6b0f7ac20b '!=' 402d7850-9e58-47ef-b979-3c6b0f7ac20b ']'
00:12:26.592   11:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 85617
00:12:26.592   11:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 85617 ']'
00:12:26.592   11:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 85617
00:12:26.592    11:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname
00:12:26.592   11:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:12:26.592    11:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85617
00:12:26.592  killing process with pid 85617
00:12:26.592   11:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:12:26.592   11:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:12:26.592   11:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85617'
00:12:26.592   11:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 85617
00:12:26.592  [2024-12-16 11:33:52.653846] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:12:26.592  [2024-12-16 11:33:52.653936] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:26.592   11:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 85617
00:12:26.592  [2024-12-16 11:33:52.654037] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:26.592  [2024-12-16 11:33:52.654046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline
00:12:26.850  [2024-12-16 11:33:52.698789] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:12:27.108   11:33:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:12:27.108  
00:12:27.108  real	0m7.109s
00:12:27.108  user	0m11.944s
00:12:27.108  sys	0m1.503s
00:12:27.108  ************************************
00:12:27.108  END TEST raid_superblock_test
00:12:27.108  ************************************
00:12:27.108   11:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:27.108   11:33:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:27.108   11:33:53 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read
00:12:27.108   11:33:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:12:27.108   11:33:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:27.108   11:33:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:12:27.108  ************************************
00:12:27.108  START TEST raid_read_error_test
00:12:27.108  ************************************
00:12:27.108   11:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read
00:12:27.108   11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1
00:12:27.108   11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4
00:12:27.108   11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:12:27.108    11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:12:27.108    11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:27.108    11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:12:27.108    11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:12:27.108    11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:27.108    11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:12:27.108    11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:12:27.108    11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:27.108    11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:12:27.108    11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:12:27.108    11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:27.108    11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4
00:12:27.108    11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:12:27.108    11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:27.108   11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:12:27.108   11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:12:27.108   11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:12:27.108   11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:12:27.108   11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:12:27.108   11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:12:27.108   11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:12:27.108   11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']'
00:12:27.108   11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0
00:12:27.108    11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:12:27.108   11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YT8uhoQPwV
00:12:27.108   11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=86093
00:12:27.108   11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 86093
00:12:27.108   11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:12:27.108   11:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 86093 ']'
00:12:27.108   11:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:27.108   11:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:12:27.108   11:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:27.108  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:27.108   11:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:12:27.108   11:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:27.108  [2024-12-16 11:33:53.126047] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:12:27.108  [2024-12-16 11:33:53.126273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86093 ]
00:12:27.366  [2024-12-16 11:33:53.287889] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:27.366  [2024-12-16 11:33:53.334175] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:12:27.366  [2024-12-16 11:33:53.377407] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:27.366  [2024-12-16 11:33:53.377514] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
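(The bdevperf process above is started with -z, so it waits for an RPC before running any I/O; the test then builds the bdev stack over the RPC socket and kicks the run off. A hedged sketch of the same launch from the repo root, assuming the standard SPDK layout where examples/bdev/bdevperf/bdevperf.py is the companion script (path and socket are assumptions):

    ./build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &
    # ...create the base bdevs and raid_bdev1 over RPC, then start the measured run:
    ./examples/bdev/bdevperf/bdevperf.py perform_tests
)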
00:12:27.931   11:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:12:28.189   11:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0
00:12:28.189   11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:12:28.189   11:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:12:28.189   11:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:28.189   11:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.189  BaseBdev1_malloc
00:12:28.189   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:28.189   11:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:12:28.189   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:28.189   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.189  true
00:12:28.189   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:28.189   11:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:12:28.189   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:28.189   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.189  [2024-12-16 11:33:54.032189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:12:28.189  [2024-12-16 11:33:54.032246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:28.189  [2024-12-16 11:33:54.032278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:12:28.189  [2024-12-16 11:33:54.032296] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:28.189  [2024-12-16 11:33:54.034649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:28.189  [2024-12-16 11:33:54.034743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:12:28.189  BaseBdev1
00:12:28.189   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:28.189   11:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.190  BaseBdev2_malloc
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.190  true
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.190  [2024-12-16 11:33:54.083423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:12:28.190  [2024-12-16 11:33:54.083518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:28.190  [2024-12-16 11:33:54.083556] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:12:28.190  [2024-12-16 11:33:54.083565] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:28.190  [2024-12-16 11:33:54.085717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:28.190  [2024-12-16 11:33:54.085751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:12:28.190  BaseBdev2
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.190  BaseBdev3_malloc
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.190  true
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.190  [2024-12-16 11:33:54.124102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:12:28.190  [2024-12-16 11:33:54.124150] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:28.190  [2024-12-16 11:33:54.124170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:12:28.190  [2024-12-16 11:33:54.124178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:28.190  [2024-12-16 11:33:54.126241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:28.190  [2024-12-16 11:33:54.126278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:12:28.190  BaseBdev3
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.190  BaseBdev4_malloc
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.190  true
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.190  [2024-12-16 11:33:54.164834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc
00:12:28.190  [2024-12-16 11:33:54.164881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:28.190  [2024-12-16 11:33:54.164903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:12:28.190  [2024-12-16 11:33:54.164911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:28.190  [2024-12-16 11:33:54.167123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:28.190  [2024-12-16 11:33:54.167159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:12:28.190  BaseBdev4
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.190  [2024-12-16 11:33:54.176853] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:28.190  [2024-12-16 11:33:54.178824] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:28.190  [2024-12-16 11:33:54.178982] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:28.190  [2024-12-16 11:33:54.179050] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:12:28.190  [2024-12-16 11:33:54.179285] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080
00:12:28.190  [2024-12-16 11:33:54.179299] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:12:28.190  [2024-12-16 11:33:54.179596] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:12:28.190  [2024-12-16 11:33:54.179741] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080
00:12:28.190  [2024-12-16 11:33:54.179755] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080
00:12:28.190  [2024-12-16 11:33:54.179905] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:28.190    11:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:28.190    11:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:28.190    11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:28.190    11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.190    11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:28.190    "name": "raid_bdev1",
00:12:28.190    "uuid": "130f5c8a-4385-4797-865a-135ee2680234",
00:12:28.190    "strip_size_kb": 0,
00:12:28.190    "state": "online",
00:12:28.190    "raid_level": "raid1",
00:12:28.190    "superblock": true,
00:12:28.190    "num_base_bdevs": 4,
00:12:28.190    "num_base_bdevs_discovered": 4,
00:12:28.190    "num_base_bdevs_operational": 4,
00:12:28.190    "base_bdevs_list": [
00:12:28.190      {
00:12:28.190        "name": "BaseBdev1",
00:12:28.190        "uuid": "49266338-3c3b-5ebf-ba85-e56bebbe82be",
00:12:28.190        "is_configured": true,
00:12:28.190        "data_offset": 2048,
00:12:28.190        "data_size": 63488
00:12:28.190      },
00:12:28.190      {
00:12:28.190        "name": "BaseBdev2",
00:12:28.190        "uuid": "fd17e5c3-5928-54d7-a898-adb88e054a86",
00:12:28.190        "is_configured": true,
00:12:28.190        "data_offset": 2048,
00:12:28.190        "data_size": 63488
00:12:28.190      },
00:12:28.190      {
00:12:28.190        "name": "BaseBdev3",
00:12:28.190        "uuid": "36215605-b885-5e15-b80e-d646375caada",
00:12:28.190        "is_configured": true,
00:12:28.190        "data_offset": 2048,
00:12:28.190        "data_size": 63488
00:12:28.190      },
00:12:28.190      {
00:12:28.190        "name": "BaseBdev4",
00:12:28.190        "uuid": "ed3138b2-9393-5beb-abd1-69206f4dec10",
00:12:28.190        "is_configured": true,
00:12:28.190        "data_offset": 2048,
00:12:28.190        "data_size": 63488
00:12:28.190      }
00:12:28.190    ]
00:12:28.190  }'
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:28.190   11:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
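The dump above is what verify_raid_bdev_state asserts against before I/O starts. As a rough illustration (not part of the test itself), the same check can be reproduced by hand with the RPC and jq filter traced at bdev_raid.sh@113; the rpc.py path below is an assumed location and the variable names are only illustrative:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed rpc.py location, not taken from this log
info=$("$rpc" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
state=$(jq -r '.state' <<< "$info")
level=$(jq -r '.raid_level' <<< "$info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")
# Expected for the freshly created 4-disk raid1 with superblock shown above.
[[ "$state" == online && "$level" == raid1 && "$discovered" -eq 4 ]] \
    && echo "raid_bdev1 state OK" || echo "raid_bdev1 state mismatch" >&2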
00:12:28.757   11:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:12:28.757   11:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:12:28.757  [2024-12-16 11:33:54.724419] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:12:29.690   11:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:12:29.690   11:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:29.690   11:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:29.690   11:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:29.690   11:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:12:29.690   11:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:12:29.690   11:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]]
00:12:29.690   11:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4
00:12:29.690   11:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:12:29.690   11:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:29.690   11:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:29.690   11:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:29.690   11:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:29.690   11:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:29.690   11:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:29.690   11:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:29.690   11:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:29.690   11:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:29.690    11:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:29.690    11:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:29.690    11:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:29.690    11:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:29.690    11:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:29.690   11:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:29.690    "name": "raid_bdev1",
00:12:29.690    "uuid": "130f5c8a-4385-4797-865a-135ee2680234",
00:12:29.690    "strip_size_kb": 0,
00:12:29.690    "state": "online",
00:12:29.690    "raid_level": "raid1",
00:12:29.690    "superblock": true,
00:12:29.690    "num_base_bdevs": 4,
00:12:29.690    "num_base_bdevs_discovered": 4,
00:12:29.690    "num_base_bdevs_operational": 4,
00:12:29.690    "base_bdevs_list": [
00:12:29.690      {
00:12:29.690        "name": "BaseBdev1",
00:12:29.690        "uuid": "49266338-3c3b-5ebf-ba85-e56bebbe82be",
00:12:29.690        "is_configured": true,
00:12:29.690        "data_offset": 2048,
00:12:29.690        "data_size": 63488
00:12:29.690      },
00:12:29.690      {
00:12:29.690        "name": "BaseBdev2",
00:12:29.690        "uuid": "fd17e5c3-5928-54d7-a898-adb88e054a86",
00:12:29.690        "is_configured": true,
00:12:29.690        "data_offset": 2048,
00:12:29.690        "data_size": 63488
00:12:29.690      },
00:12:29.690      {
00:12:29.690        "name": "BaseBdev3",
00:12:29.690        "uuid": "36215605-b885-5e15-b80e-d646375caada",
00:12:29.690        "is_configured": true,
00:12:29.690        "data_offset": 2048,
00:12:29.690        "data_size": 63488
00:12:29.690      },
00:12:29.690      {
00:12:29.690        "name": "BaseBdev4",
00:12:29.690        "uuid": "ed3138b2-9393-5beb-abd1-69206f4dec10",
00:12:29.690        "is_configured": true,
00:12:29.690        "data_offset": 2048,
00:12:29.690        "data_size": 63488
00:12:29.690      }
00:12:29.690    ]
00:12:29.690  }'
00:12:29.690   11:33:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:29.690   11:33:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:30.255   11:33:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:12:30.255   11:33:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:30.255   11:33:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:30.255  [2024-12-16 11:33:56.131948] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:30.255  [2024-12-16 11:33:56.131984] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:30.255  [2024-12-16 11:33:56.134672] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:30.255  [2024-12-16 11:33:56.134724] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:30.255  [2024-12-16 11:33:56.134847] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:30.255  [2024-12-16 11:33:56.134857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline
00:12:30.255  {
00:12:30.255    "results": [
00:12:30.255      {
00:12:30.255        "job": "raid_bdev1",
00:12:30.255        "core_mask": "0x1",
00:12:30.255        "workload": "randrw",
00:12:30.255        "percentage": 50,
00:12:30.255        "status": "finished",
00:12:30.255        "queue_depth": 1,
00:12:30.255        "io_size": 131072,
00:12:30.255        "runtime": 1.40815,
00:12:30.255        "iops": 10728.260483613252,
00:12:30.255        "mibps": 1341.0325604516565,
00:12:30.255        "io_failed": 0,
00:12:30.255        "io_timeout": 0,
00:12:30.255        "avg_latency_us": 90.44364488193824,
00:12:30.255        "min_latency_us": 23.923144104803495,
00:12:30.255        "max_latency_us": 1645.5545851528384
00:12:30.255      }
00:12:30.255    ],
00:12:30.255    "core_count": 1
00:12:30.255  }
00:12:30.256   11:33:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:30.256   11:33:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 86093
00:12:30.256   11:33:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 86093 ']'
00:12:30.256   11:33:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 86093
00:12:30.256    11:33:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname
00:12:30.256   11:33:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:12:30.256    11:33:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86093
00:12:30.256   11:33:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:12:30.256   11:33:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:12:30.256   11:33:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86093'
00:12:30.256  killing process with pid 86093
00:12:30.256   11:33:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 86093
00:12:30.256   11:33:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 86093
00:12:30.256  [2024-12-16 11:33:56.176586] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:12:30.256  [2024-12-16 11:33:56.213952] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:12:30.515    11:33:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YT8uhoQPwV
00:12:30.515    11:33:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:12:30.515    11:33:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:12:30.515  ************************************
00:12:30.515  END TEST raid_read_error_test
00:12:30.515  ************************************
00:12:30.515   11:33:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00
00:12:30.515   11:33:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1
00:12:30.515   11:33:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:12:30.515   11:33:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0
00:12:30.515   11:33:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]]
00:12:30.515  
00:12:30.515  real	0m3.443s
00:12:30.515  user	0m4.368s
00:12:30.515  sys	0m0.568s
00:12:30.515   11:33:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:30.515   11:33:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
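That closes raid_read_error_test: four malloc bdevs are wrapped in error and passthru bdevs, assembled into a 4-disk raid1 with a superblock, read failures are injected on the first leg while bdevperf runs, and the array stays online with all four members discovered. A condensed, hand-written sketch of that RPC sequence for one leg (the path to rpc.py is an assumption; everything else mirrors the trace above):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed location of rpc.py
"$rpc" bdev_malloc_create 32 512 -b BaseBdev1_malloc             # backing malloc bdev
"$rpc" bdev_error_create BaseBdev1_malloc                        # error bdev, exposed as EE_BaseBdev1_malloc
"$rpc" bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1  # passthru bdev the raid actually claims
# ...repeated for BaseBdev2..BaseBdev4, then the array is assembled with a superblock (-s):
"$rpc" bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
# Read failures on one leg are absorbed by raid1 redundancy, so the second
# state dump still reports num_base_bdevs_discovered: 4.
"$rpc" bdev_error_inject_error EE_BaseBdev1_malloc read failure
"$rpc" bdev_raid_delete raid_bdev1                               # teardown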
00:12:30.515   11:33:56 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write
00:12:30.515   11:33:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:12:30.515   11:33:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:30.515   11:33:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:12:30.515  ************************************
00:12:30.515  START TEST raid_write_error_test
00:12:30.515  ************************************
00:12:30.515   11:33:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write
00:12:30.515   11:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1
00:12:30.515   11:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4
00:12:30.515   11:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:12:30.515    11:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:12:30.515    11:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:30.515    11:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:12:30.515    11:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:12:30.515    11:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:30.515    11:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:12:30.515    11:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:12:30.515    11:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:30.515    11:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:12:30.515    11:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:12:30.515    11:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:30.515    11:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4
00:12:30.515    11:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:12:30.515    11:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:30.515   11:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:12:30.515   11:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:12:30.515   11:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:12:30.515   11:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:12:30.515   11:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:12:30.515   11:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:12:30.515   11:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:12:30.515   11:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']'
00:12:30.515   11:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0
00:12:30.515    11:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:12:30.515   11:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Pmhg7JhR50
00:12:30.515   11:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=86222
00:12:30.515   11:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 86222
00:12:30.515   11:33:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 86222 ']'
00:12:30.515   11:33:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:30.515   11:33:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:12:30.515  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:30.515   11:33:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:30.515   11:33:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:12:30.515   11:33:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:30.515   11:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:12:30.774  [2024-12-16 11:33:56.621944] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:12:30.774  [2024-12-16 11:33:56.622099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86222 ]
00:12:30.774  [2024-12-16 11:33:56.782360] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:30.774  [2024-12-16 11:33:56.833852] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:12:31.032  [2024-12-16 11:33:56.876532] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:31.032  [2024-12-16 11:33:56.876578] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:31.644  BaseBdev1_malloc
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:31.644  true
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:31.644  [2024-12-16 11:33:57.538965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:12:31.644  [2024-12-16 11:33:57.539022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:31.644  [2024-12-16 11:33:57.539042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:12:31.644  [2024-12-16 11:33:57.539059] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:31.644  [2024-12-16 11:33:57.541326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:31.644  [2024-12-16 11:33:57.541366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:12:31.644  BaseBdev1
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:31.644  BaseBdev2_malloc
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:31.644  true
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:31.644  [2024-12-16 11:33:57.589447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:12:31.644  [2024-12-16 11:33:57.589620] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:31.644  [2024-12-16 11:33:57.589647] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:12:31.644  [2024-12-16 11:33:57.589657] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:31.644  [2024-12-16 11:33:57.591926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:31.644  [2024-12-16 11:33:57.591963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:12:31.644  BaseBdev2
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:31.644  BaseBdev3_malloc
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:31.644  true
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:31.644  [2024-12-16 11:33:57.630671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:12:31.644  [2024-12-16 11:33:57.630748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:31.644  [2024-12-16 11:33:57.630776] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:12:31.644  [2024-12-16 11:33:57.630786] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:31.644  [2024-12-16 11:33:57.633182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:31.644  [2024-12-16 11:33:57.633314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:12:31.644  BaseBdev3
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:31.644  BaseBdev4_malloc
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:31.644  true
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:31.644   11:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4
00:12:31.645   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:31.645   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:31.645  [2024-12-16 11:33:57.671886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc
00:12:31.645  [2024-12-16 11:33:57.671993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:31.645  [2024-12-16 11:33:57.672024] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:12:31.645  [2024-12-16 11:33:57.672034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:31.645  [2024-12-16 11:33:57.674386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:31.645  [2024-12-16 11:33:57.674426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:12:31.645  BaseBdev4
00:12:31.645   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:31.645   11:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s
00:12:31.645   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:31.645   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:31.645  [2024-12-16 11:33:57.683907] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:31.645  [2024-12-16 11:33:57.685816] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:31.645  [2024-12-16 11:33:57.685905] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:31.645  [2024-12-16 11:33:57.685960] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:12:31.645  [2024-12-16 11:33:57.686168] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080
00:12:31.645  [2024-12-16 11:33:57.686180] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:12:31.645  [2024-12-16 11:33:57.686435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:12:31.645  [2024-12-16 11:33:57.686586] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080
00:12:31.645  [2024-12-16 11:33:57.686599] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080
00:12:31.645  [2024-12-16 11:33:57.686734] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:31.645   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:31.645   11:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:12:31.645   11:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:31.645   11:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:31.645   11:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:31.645   11:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:31.645   11:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:31.645   11:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:31.645   11:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:31.905   11:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:31.905   11:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:31.905    11:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:31.905    11:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:31.905    11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:31.905    11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:31.905    11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:31.905   11:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:31.905    "name": "raid_bdev1",
00:12:31.905    "uuid": "ea732f5b-18de-4035-9f3a-099c6719f586",
00:12:31.905    "strip_size_kb": 0,
00:12:31.905    "state": "online",
00:12:31.905    "raid_level": "raid1",
00:12:31.905    "superblock": true,
00:12:31.905    "num_base_bdevs": 4,
00:12:31.905    "num_base_bdevs_discovered": 4,
00:12:31.905    "num_base_bdevs_operational": 4,
00:12:31.905    "base_bdevs_list": [
00:12:31.905      {
00:12:31.905        "name": "BaseBdev1",
00:12:31.905        "uuid": "416a2912-c8ae-5a1b-b5e6-8609bf6b52ae",
00:12:31.905        "is_configured": true,
00:12:31.905        "data_offset": 2048,
00:12:31.905        "data_size": 63488
00:12:31.905      },
00:12:31.905      {
00:12:31.905        "name": "BaseBdev2",
00:12:31.905        "uuid": "ba985137-ddf3-5446-8bc5-ae091b2a8c59",
00:12:31.905        "is_configured": true,
00:12:31.905        "data_offset": 2048,
00:12:31.905        "data_size": 63488
00:12:31.905      },
00:12:31.905      {
00:12:31.905        "name": "BaseBdev3",
00:12:31.905        "uuid": "f3aaaf3e-a5c1-5c08-b018-f6e06d8f3a3b",
00:12:31.905        "is_configured": true,
00:12:31.905        "data_offset": 2048,
00:12:31.905        "data_size": 63488
00:12:31.905      },
00:12:31.905      {
00:12:31.905        "name": "BaseBdev4",
00:12:31.905        "uuid": "9439c38d-4191-556d-892a-eed49f6ed631",
00:12:31.905        "is_configured": true,
00:12:31.905        "data_offset": 2048,
00:12:31.905        "data_size": 63488
00:12:31.905      }
00:12:31.905    ]
00:12:31.905  }'
00:12:31.905   11:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:31.905   11:33:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:32.164   11:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:12:32.164   11:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:12:32.422  [2024-12-16 11:33:58.247383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:12:33.356   11:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:12:33.356   11:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:33.356   11:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:33.356  [2024-12-16 11:33:59.157749] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1'
00:12:33.357  [2024-12-16 11:33:59.157803] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:12:33.357  [2024-12-16 11:33:59.158041] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0
00:12:33.357   11:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:33.357   11:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:12:33.357   11:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:12:33.357   11:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]]
00:12:33.357   11:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3
00:12:33.357   11:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:12:33.357   11:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:33.357   11:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:33.357   11:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:33.357   11:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:33.357   11:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:33.357   11:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:33.357   11:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:33.357   11:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:33.357   11:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:33.357    11:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:33.357    11:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:33.357    11:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:33.357    11:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:33.357    11:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:33.357   11:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:33.357    "name": "raid_bdev1",
00:12:33.357    "uuid": "ea732f5b-18de-4035-9f3a-099c6719f586",
00:12:33.357    "strip_size_kb": 0,
00:12:33.357    "state": "online",
00:12:33.357    "raid_level": "raid1",
00:12:33.357    "superblock": true,
00:12:33.357    "num_base_bdevs": 4,
00:12:33.357    "num_base_bdevs_discovered": 3,
00:12:33.357    "num_base_bdevs_operational": 3,
00:12:33.357    "base_bdevs_list": [
00:12:33.357      {
00:12:33.357        "name": null,
00:12:33.357        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:33.357        "is_configured": false,
00:12:33.357        "data_offset": 0,
00:12:33.357        "data_size": 63488
00:12:33.357      },
00:12:33.357      {
00:12:33.357        "name": "BaseBdev2",
00:12:33.357        "uuid": "ba985137-ddf3-5446-8bc5-ae091b2a8c59",
00:12:33.357        "is_configured": true,
00:12:33.357        "data_offset": 2048,
00:12:33.357        "data_size": 63488
00:12:33.357      },
00:12:33.357      {
00:12:33.357        "name": "BaseBdev3",
00:12:33.357        "uuid": "f3aaaf3e-a5c1-5c08-b018-f6e06d8f3a3b",
00:12:33.357        "is_configured": true,
00:12:33.357        "data_offset": 2048,
00:12:33.357        "data_size": 63488
00:12:33.357      },
00:12:33.357      {
00:12:33.357        "name": "BaseBdev4",
00:12:33.357        "uuid": "9439c38d-4191-556d-892a-eed49f6ed631",
00:12:33.357        "is_configured": true,
00:12:33.357        "data_offset": 2048,
00:12:33.357        "data_size": 63488
00:12:33.357      }
00:12:33.357    ]
00:12:33.357  }'
00:12:33.357   11:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:33.357   11:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:33.613   11:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:12:33.613   11:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:33.613   11:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:33.613  [2024-12-16 11:33:59.629763] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:33.614  [2024-12-16 11:33:59.629875] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:33.614  [2024-12-16 11:33:59.632493] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:33.614  [2024-12-16 11:33:59.632601] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:33.614  [2024-12-16 11:33:59.632737] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:33.614  [2024-12-16 11:33:59.632786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline
00:12:33.614  {
00:12:33.614    "results": [
00:12:33.614      {
00:12:33.614        "job": "raid_bdev1",
00:12:33.614        "core_mask": "0x1",
00:12:33.614        "workload": "randrw",
00:12:33.614        "percentage": 50,
00:12:33.614        "status": "finished",
00:12:33.614        "queue_depth": 1,
00:12:33.614        "io_size": 131072,
00:12:33.614        "runtime": 1.383143,
00:12:33.614        "iops": 11942.366046027057,
00:12:33.614        "mibps": 1492.795755753382,
00:12:33.614        "io_failed": 0,
00:12:33.614        "io_timeout": 0,
00:12:33.614        "avg_latency_us": 80.97205969827279,
00:12:33.614        "min_latency_us": 23.699563318777294,
00:12:33.614        "max_latency_us": 1473.844541484716
00:12:33.614      }
00:12:33.614    ],
00:12:33.614    "core_count": 1
00:12:33.614  }
00:12:33.614   11:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:33.614   11:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 86222
00:12:33.614   11:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 86222 ']'
00:12:33.614   11:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 86222
00:12:33.614    11:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname
00:12:33.614   11:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:12:33.614    11:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86222
00:12:33.614   11:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:12:33.614   11:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:12:33.614   11:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86222'
00:12:33.614  killing process with pid 86222
00:12:33.614   11:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 86222
00:12:33.614  [2024-12-16 11:33:59.677307] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:12:33.614   11:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 86222
00:12:33.871  [2024-12-16 11:33:59.712398] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:12:34.130    11:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Pmhg7JhR50
00:12:34.130    11:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:12:34.130    11:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:12:34.130   11:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00
00:12:34.130   11:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1
00:12:34.130   11:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:12:34.130   11:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0
00:12:34.130   11:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]]
00:12:34.130  
00:12:34.130  real	0m3.441s
00:12:34.130  user	0m4.399s
00:12:34.130  sys	0m0.561s
00:12:34.130   11:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:34.130   11:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:34.130  ************************************
00:12:34.130  END TEST raid_write_error_test
00:12:34.130  ************************************
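raid_write_error_test follows the same setup, but injecting write failures on EE_BaseBdev1_malloc removes the failing leg: the state dump drops to num_base_bdevs_discovered/operational 3 and slot 0 becomes a null entry, while the array stays online. The pass/fail criterion at bdev_raid.sh@845-847 is the failed-I/O-per-second column pulled from the bdevperf log; a minimal sketch of that extraction, with a placeholder for the mktemp-generated log name:

bdevperf_log=/raidtest/tmp.XXXXXXXXXX   # placeholder; the real name comes from mktemp -p /raidtest
fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
# raid1 has redundancy (has_redundancy returns 0), so the test expects no
# user-visible failed I/O even though one leg is failing every write.
[[ "$fail_per_s" == 0.00 ]] && echo "no failed I/O" || echo "unexpected failures: ${fail_per_s}/s"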
00:12:34.130   11:34:00 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']'
00:12:34.130   11:34:00 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4
00:12:34.130   11:34:00 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true
00:12:34.130   11:34:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:12:34.130   11:34:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:34.130   11:34:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:12:34.130  ************************************
00:12:34.130  START TEST raid_rebuild_test
00:12:34.130  ************************************
00:12:34.130   11:34:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true
00:12:34.130   11:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1
00:12:34.130   11:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2
00:12:34.130   11:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false
00:12:34.130   11:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:12:34.130   11:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true
00:12:34.130    11:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:12:34.130    11:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:12:34.130    11:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:12:34.130    11:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:12:34.130    11:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:12:34.130    11:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:12:34.130    11:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:12:34.130    11:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:12:34.130   11:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:12:34.130   11:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:12:34.130   11:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:12:34.130   11:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size
00:12:34.130   11:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg
00:12:34.130   11:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:12:34.130   11:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset
00:12:34.130   11:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']'
00:12:34.130   11:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0
00:12:34.130   11:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']'
00:12:34.130   11:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=86349
00:12:34.130   11:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:12:34.130   11:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 86349
00:12:34.130   11:34:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 86349 ']'
00:12:34.131   11:34:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:34.131   11:34:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:12:34.131   11:34:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:34.131  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:34.131   11:34:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:12:34.131   11:34:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:34.131  [2024-12-16 11:34:00.129959] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:12:34.131  [2024-12-16 11:34:00.130170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86349 ]
00:12:34.131  I/O size of 3145728 is greater than zero copy threshold (65536).
00:12:34.131  Zero copy mechanism will not be used.
00:12:34.388  [2024-12-16 11:34:00.292354] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:34.388  [2024-12-16 11:34:00.341952] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:12:34.388  [2024-12-16 11:34:00.385206] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:34.388  [2024-12-16 11:34:00.385309] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:34.954   11:34:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:12:34.954   11:34:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0
00:12:34.954   11:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:12:34.954   11:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:12:34.954   11:34:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:34.954   11:34:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:34.954  BaseBdev1_malloc
00:12:34.954   11:34:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:34.954   11:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:12:34.954   11:34:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:34.954   11:34:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:34.954  [2024-12-16 11:34:00.988119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:12:34.954  [2024-12-16 11:34:00.988265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:34.954  [2024-12-16 11:34:00.988300] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:12:34.954  [2024-12-16 11:34:00.988316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:34.954  [2024-12-16 11:34:00.990771] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:34.954  [2024-12-16 11:34:00.990808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:12:34.954  BaseBdev1
00:12:34.954   11:34:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:34.954   11:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:12:34.954   11:34:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:12:34.954   11:34:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:34.954   11:34:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:35.212  BaseBdev2_malloc
00:12:35.212   11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:35.212   11:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:12:35.212   11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:35.212   11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:35.212  [2024-12-16 11:34:01.026896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:12:35.212  [2024-12-16 11:34:01.026954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:35.212  [2024-12-16 11:34:01.026975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:12:35.213  [2024-12-16 11:34:01.026984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:35.213  [2024-12-16 11:34:01.029149] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:35.213  [2024-12-16 11:34:01.029187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:12:35.213  BaseBdev2
00:12:35.213   11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:35.213   11:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:12:35.213   11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:35.213   11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:35.213  spare_malloc
00:12:35.213   11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:35.213   11:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:12:35.213   11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:35.213   11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:35.213  spare_delay
00:12:35.213   11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:35.213   11:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:12:35.213   11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:35.213   11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:35.213  [2024-12-16 11:34:01.067526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:12:35.213  [2024-12-16 11:34:01.067596] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:35.213  [2024-12-16 11:34:01.067620] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:12:35.213  [2024-12-16 11:34:01.067629] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:35.213  [2024-12-16 11:34:01.069804] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:35.213  [2024-12-16 11:34:01.069885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:12:35.213  spare
00:12:35.213   11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
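The trace above finishes building the bdev stacks the test drives: a 32 MiB (65536 blocks of 512 bytes) malloc bdev wrapped in a passthru bdev for BaseBdev2, and a second malloc bdev wrapped first in a delay bdev and then in a passthru bdev for the future spare. A minimal sketch of the same RPC sequence, using the sizes and flags shown in the trace (the delay bdev's -w/-n values appear intended to add write latency so the later rebuild stays slow enough to observe):

  # BaseBdev2: malloc -> passthru
  scripts/rpc.py bdev_malloc_create 32 512 -b BaseBdev2_malloc
  scripts/rpc.py bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
  # spare: malloc -> delay -> passthru
  scripts/rpc.py bdev_malloc_create 32 512 -b spare_malloc
  scripts/rpc.py bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
  scripts/rpc.py bdev_passthru_create -b spare_delay -p spare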
00:12:35.213   11:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1
00:12:35.213   11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:35.213   11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:35.213  [2024-12-16 11:34:01.079546] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:35.213  [2024-12-16 11:34:01.081389] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:35.213  [2024-12-16 11:34:01.081475] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:12:35.213  [2024-12-16 11:34:01.081487] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:12:35.213  [2024-12-16 11:34:01.081751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:12:35.213  [2024-12-16 11:34:01.081867] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:12:35.213  [2024-12-16 11:34:01.081885] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:12:35.213  [2024-12-16 11:34:01.082015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:35.213   11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
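With both base bdevs claimed, the test assembles them into a two-way RAID1 mirror named raid_bdev1; this variant runs without an on-disk superblock, as the "superblock": false field in the dump below confirms. The equivalent direct rpc.py call, matching the command issued above:

  scripts/rpc.py bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1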
00:12:35.213   11:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:12:35.213   11:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:35.213   11:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:35.213   11:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:35.213   11:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:35.213   11:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:12:35.213   11:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:35.213   11:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:35.213   11:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:35.213   11:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:35.213    11:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:35.213    11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:35.213    11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:35.213    11:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:35.213    11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:35.213   11:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:35.213    "name": "raid_bdev1",
00:12:35.213    "uuid": "7291e634-428b-4061-a4a1-b074192e04b8",
00:12:35.213    "strip_size_kb": 0,
00:12:35.213    "state": "online",
00:12:35.213    "raid_level": "raid1",
00:12:35.213    "superblock": false,
00:12:35.213    "num_base_bdevs": 2,
00:12:35.213    "num_base_bdevs_discovered": 2,
00:12:35.213    "num_base_bdevs_operational": 2,
00:12:35.213    "base_bdevs_list": [
00:12:35.213      {
00:12:35.213        "name": "BaseBdev1",
00:12:35.213        "uuid": "38959039-32d8-5411-94ce-bc5851113adf",
00:12:35.213        "is_configured": true,
00:12:35.213        "data_offset": 0,
00:12:35.213        "data_size": 65536
00:12:35.213      },
00:12:35.213      {
00:12:35.213        "name": "BaseBdev2",
00:12:35.213        "uuid": "9ed44adb-5f7e-5569-bf74-eca6f4eab503",
00:12:35.213        "is_configured": true,
00:12:35.213        "data_offset": 0,
00:12:35.213        "data_size": 65536
00:12:35.213      }
00:12:35.213    ]
00:12:35.213  }'
00:12:35.213   11:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:35.213   11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
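verify_raid_bdev_state is the checking pattern used throughout this log: dump all RAID bdevs, pick out raid_bdev1 with jq, and compare state, level, strip size, and base-bdev counts against the expected values. A condensed sketch of that check, assuming jq as used in the trace:

  info=$(scripts/rpc.py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  jq -r '.state, .raid_level, .num_base_bdevs_discovered, .num_base_bdevs_operational' <<< "$info"
  # expected here: online, raid1, 2, 2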
00:12:35.779    11:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:35.779    11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:35.779    11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:35.779    11:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:12:35.779  [2024-12-16 11:34:01.551137] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:35.779    11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:35.779   11:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536
00:12:35.779    11:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:12:35.779    11:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:35.779    11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:35.779    11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:35.779    11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:35.779   11:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0
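Two values from the freshly created array are cached for later: its size in blocks (from bdev_get_bdevs) and the data offset of the first base bdev (from bdev_raid_get_bdevs), which is 0 in this no-superblock variant. They drive the size of the data fill below and the offset passed to cmp at the end. Roughly:

  raid_bdev_size=$(scripts/rpc.py bdev_get_bdevs -b raid_bdev1 | jq -r '.[].num_blocks')              # 65536
  data_offset=$(scripts/rpc.py bdev_raid_get_bdevs all | jq -r '.[].base_bdevs_list[0].data_offset')  # 0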
00:12:35.779   11:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:12:35.779   11:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:12:35.779   11:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:12:35.779   11:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:12:35.779   11:34:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:12:35.779   11:34:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:12:35.779   11:34:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:12:35.779   11:34:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:12:35.779   11:34:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:12:35.779   11:34:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:12:35.779   11:34:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:12:35.779   11:34:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:12:35.779   11:34:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:12:35.779  [2024-12-16 11:34:01.822419] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:12:35.779  /dev/nbd0
00:12:36.038    11:34:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:12:36.038   11:34:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:12:36.038   11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:12:36.038   11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i
00:12:36.038   11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:12:36.038   11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:12:36.038   11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:12:36.038   11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break
00:12:36.038   11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:12:36.038   11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:12:36.038   11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:36.038  1+0 records in
00:12:36.038  1+0 records out
00:12:36.038  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426764 s, 9.6 MB/s
00:12:36.038    11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:36.038   11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096
00:12:36.038   11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:36.038   11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:12:36.038   11:34:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0
00:12:36.038   11:34:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:36.038   11:34:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:12:36.038   11:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:12:36.038   11:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:12:36.038   11:34:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct
00:12:40.226  65536+0 records in
00:12:40.226  65536+0 records out
00:12:40.226  33554432 bytes (34 MB, 32 MiB) copied, 3.96241 s, 8.5 MB/s
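To seed the mirror with known data, raid_bdev1 is exported through the kernel NBD driver, the script waits for the device node to appear in /proc/partitions, and dd fills all 65536 blocks (32 MiB) with random data using direct I/O; the write_unit_size of 1 applies because this is raid1 rather than raid5f. The same steps in short form, with the rpc.py path shortened from the one in the trace:

  scripts/rpc.py nbd_start_disk raid_bdev1 /dev/nbd0
  grep -q -w nbd0 /proc/partitions                                 # retried until the node appears
  dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct
  scripts/rpc.py nbd_stop_disk /dev/nbd0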
00:12:40.226   11:34:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:12:40.226   11:34:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:12:40.226   11:34:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:12:40.226   11:34:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:12:40.226   11:34:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:12:40.226   11:34:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:40.226   11:34:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:12:40.226    11:34:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:12:40.226  [2024-12-16 11:34:06.074395] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:40.226   11:34:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:12:40.226   11:34:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:12:40.226   11:34:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:40.226   11:34:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:40.226   11:34:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:12:40.226   11:34:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:12:40.226   11:34:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:12:40.226   11:34:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:12:40.226   11:34:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:40.226   11:34:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:40.226  [2024-12-16 11:34:06.090492] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:12:40.226   11:34:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
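With the test data in place and the NBD export torn down, BaseBdev1 is hot-removed from the mirror. The state dump that follows shows the array staying online but degraded: one discovered/operational base bdev, with the removed slot reported as a null entry. The removal is a single RPC:

  scripts/rpc.py bdev_raid_remove_base_bdev BaseBdev1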
00:12:40.226   11:34:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:12:40.226   11:34:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:40.226   11:34:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:40.226   11:34:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:40.226   11:34:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:40.226   11:34:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:12:40.226   11:34:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:40.226   11:34:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:40.226   11:34:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:40.226   11:34:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:40.226    11:34:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:40.226    11:34:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:40.226    11:34:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:40.226    11:34:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:40.226    11:34:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:40.226   11:34:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:40.226    "name": "raid_bdev1",
00:12:40.226    "uuid": "7291e634-428b-4061-a4a1-b074192e04b8",
00:12:40.226    "strip_size_kb": 0,
00:12:40.226    "state": "online",
00:12:40.226    "raid_level": "raid1",
00:12:40.226    "superblock": false,
00:12:40.226    "num_base_bdevs": 2,
00:12:40.226    "num_base_bdevs_discovered": 1,
00:12:40.226    "num_base_bdevs_operational": 1,
00:12:40.226    "base_bdevs_list": [
00:12:40.226      {
00:12:40.226        "name": null,
00:12:40.226        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:40.226        "is_configured": false,
00:12:40.226        "data_offset": 0,
00:12:40.226        "data_size": 65536
00:12:40.226      },
00:12:40.226      {
00:12:40.226        "name": "BaseBdev2",
00:12:40.226        "uuid": "9ed44adb-5f7e-5569-bf74-eca6f4eab503",
00:12:40.226        "is_configured": true,
00:12:40.226        "data_offset": 0,
00:12:40.226        "data_size": 65536
00:12:40.226      }
00:12:40.226    ]
00:12:40.226  }'
00:12:40.226   11:34:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:40.226   11:34:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:40.794   11:34:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:12:40.794   11:34:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:40.794   11:34:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:40.794  [2024-12-16 11:34:06.565708] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:12:40.794  [2024-12-16 11:34:06.570118] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09a30
00:12:40.794   11:34:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:40.794   11:34:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1
00:12:40.794  [2024-12-16 11:34:06.572276] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
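The spare passthru bdev is then attached to the degraded array, which immediately starts a rebuild onto it ("Started rebuild on raid bdev raid_bdev1"); the script sleeps one second so the process has some progress to report before the next check. In short:

  scripts/rpc.py bdev_raid_add_base_bdev raid_bdev1 spare
  sleep 1    # let the rebuild make progress before verifying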
00:12:41.731   11:34:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:41.731   11:34:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:41.731   11:34:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:41.731   11:34:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:41.731   11:34:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:41.731    11:34:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:41.731    11:34:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:41.731    11:34:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:41.731    11:34:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:41.731    11:34:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:41.731   11:34:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:41.731    "name": "raid_bdev1",
00:12:41.731    "uuid": "7291e634-428b-4061-a4a1-b074192e04b8",
00:12:41.731    "strip_size_kb": 0,
00:12:41.731    "state": "online",
00:12:41.731    "raid_level": "raid1",
00:12:41.731    "superblock": false,
00:12:41.731    "num_base_bdevs": 2,
00:12:41.731    "num_base_bdevs_discovered": 2,
00:12:41.731    "num_base_bdevs_operational": 2,
00:12:41.731    "process": {
00:12:41.731      "type": "rebuild",
00:12:41.731      "target": "spare",
00:12:41.731      "progress": {
00:12:41.731        "blocks": 20480,
00:12:41.731        "percent": 31
00:12:41.731      }
00:12:41.731    },
00:12:41.731    "base_bdevs_list": [
00:12:41.731      {
00:12:41.731        "name": "spare",
00:12:41.731        "uuid": "0c699582-a489-57d3-980b-ee7a6bf9ea27",
00:12:41.731        "is_configured": true,
00:12:41.731        "data_offset": 0,
00:12:41.731        "data_size": 65536
00:12:41.731      },
00:12:41.731      {
00:12:41.731        "name": "BaseBdev2",
00:12:41.731        "uuid": "9ed44adb-5f7e-5569-bf74-eca6f4eab503",
00:12:41.731        "is_configured": true,
00:12:41.731        "data_offset": 0,
00:12:41.731        "data_size": 65536
00:12:41.731      }
00:12:41.731    ]
00:12:41.731  }'
00:12:41.731    11:34:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:41.731   11:34:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:41.731    11:34:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:41.731   11:34:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:41.731   11:34:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:12:41.731   11:34:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:41.731   11:34:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:41.731  [2024-12-16 11:34:07.732647] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:12:41.731  [2024-12-16 11:34:07.777717] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:12:41.731  [2024-12-16 11:34:07.777824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:41.731  [2024-12-16 11:34:07.777864] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:12:41.731  [2024-12-16 11:34:07.777886] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:12:41.731   11:34:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
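Here the test deliberately removes the rebuild target while the rebuild is still running. The rebuild is aborted (note the "Finished rebuild ... No such device" warning and the failed target-removal error above), and the array drops back to a single operational base bdev, which the next state dump confirms. The trigger is simply:

  scripts/rpc.py bdev_raid_remove_base_bdev spare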
00:12:41.731   11:34:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:12:41.731   11:34:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:41.731   11:34:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:41.731   11:34:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:41.731   11:34:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:41.732   11:34:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:12:41.732   11:34:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:41.732   11:34:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:41.732   11:34:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:41.732   11:34:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:41.732    11:34:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:41.732    11:34:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:41.732    11:34:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:41.732    11:34:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:41.991    11:34:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:41.991   11:34:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:41.991    "name": "raid_bdev1",
00:12:41.991    "uuid": "7291e634-428b-4061-a4a1-b074192e04b8",
00:12:41.991    "strip_size_kb": 0,
00:12:41.991    "state": "online",
00:12:41.991    "raid_level": "raid1",
00:12:41.991    "superblock": false,
00:12:41.991    "num_base_bdevs": 2,
00:12:41.991    "num_base_bdevs_discovered": 1,
00:12:41.991    "num_base_bdevs_operational": 1,
00:12:41.991    "base_bdevs_list": [
00:12:41.991      {
00:12:41.991        "name": null,
00:12:41.991        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:41.991        "is_configured": false,
00:12:41.991        "data_offset": 0,
00:12:41.991        "data_size": 65536
00:12:41.991      },
00:12:41.991      {
00:12:41.991        "name": "BaseBdev2",
00:12:41.991        "uuid": "9ed44adb-5f7e-5569-bf74-eca6f4eab503",
00:12:41.991        "is_configured": true,
00:12:41.991        "data_offset": 0,
00:12:41.991        "data_size": 65536
00:12:41.991      }
00:12:41.991    ]
00:12:41.991  }'
00:12:41.991   11:34:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:41.991   11:34:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:42.249   11:34:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:12:42.249   11:34:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:42.249   11:34:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:12:42.249   11:34:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none
00:12:42.249   11:34:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:42.249    11:34:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:42.249    11:34:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:42.249    11:34:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:42.249    11:34:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:42.249    11:34:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:42.249   11:34:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:42.249    "name": "raid_bdev1",
00:12:42.249    "uuid": "7291e634-428b-4061-a4a1-b074192e04b8",
00:12:42.249    "strip_size_kb": 0,
00:12:42.249    "state": "online",
00:12:42.249    "raid_level": "raid1",
00:12:42.249    "superblock": false,
00:12:42.249    "num_base_bdevs": 2,
00:12:42.249    "num_base_bdevs_discovered": 1,
00:12:42.249    "num_base_bdevs_operational": 1,
00:12:42.249    "base_bdevs_list": [
00:12:42.249      {
00:12:42.249        "name": null,
00:12:42.249        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:42.249        "is_configured": false,
00:12:42.249        "data_offset": 0,
00:12:42.249        "data_size": 65536
00:12:42.249      },
00:12:42.249      {
00:12:42.249        "name": "BaseBdev2",
00:12:42.249        "uuid": "9ed44adb-5f7e-5569-bf74-eca6f4eab503",
00:12:42.249        "is_configured": true,
00:12:42.249        "data_offset": 0,
00:12:42.249        "data_size": 65536
00:12:42.249      }
00:12:42.249    ]
00:12:42.249  }'
00:12:42.249    11:34:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:42.507   11:34:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:12:42.507    11:34:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:42.507   11:34:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:12:42.507   11:34:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:12:42.507   11:34:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:42.507   11:34:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:42.507  [2024-12-16 11:34:08.381435] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:12:42.507  [2024-12-16 11:34:08.385859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09b00
00:12:42.507   11:34:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:42.507   11:34:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1
00:12:42.507  [2024-12-16 11:34:08.388003] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:12:43.442   11:34:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:43.442   11:34:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:43.442   11:34:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:43.442   11:34:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:43.442   11:34:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:43.442    11:34:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:43.442    11:34:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:43.442    11:34:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:43.442    11:34:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:43.443    11:34:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:43.443   11:34:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:43.443    "name": "raid_bdev1",
00:12:43.443    "uuid": "7291e634-428b-4061-a4a1-b074192e04b8",
00:12:43.443    "strip_size_kb": 0,
00:12:43.443    "state": "online",
00:12:43.443    "raid_level": "raid1",
00:12:43.443    "superblock": false,
00:12:43.443    "num_base_bdevs": 2,
00:12:43.443    "num_base_bdevs_discovered": 2,
00:12:43.443    "num_base_bdevs_operational": 2,
00:12:43.443    "process": {
00:12:43.443      "type": "rebuild",
00:12:43.443      "target": "spare",
00:12:43.443      "progress": {
00:12:43.443        "blocks": 20480,
00:12:43.443        "percent": 31
00:12:43.443      }
00:12:43.443    },
00:12:43.443    "base_bdevs_list": [
00:12:43.443      {
00:12:43.443        "name": "spare",
00:12:43.443        "uuid": "0c699582-a489-57d3-980b-ee7a6bf9ea27",
00:12:43.443        "is_configured": true,
00:12:43.443        "data_offset": 0,
00:12:43.443        "data_size": 65536
00:12:43.443      },
00:12:43.443      {
00:12:43.443        "name": "BaseBdev2",
00:12:43.443        "uuid": "9ed44adb-5f7e-5569-bf74-eca6f4eab503",
00:12:43.443        "is_configured": true,
00:12:43.443        "data_offset": 0,
00:12:43.443        "data_size": 65536
00:12:43.443      }
00:12:43.443    ]
00:12:43.443  }'
00:12:43.443    11:34:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:43.443   11:34:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:43.443    11:34:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:43.702   11:34:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:43.702   11:34:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']'
00:12:43.702   11:34:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:12:43.702   11:34:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:12:43.702   11:34:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:12:43.702   11:34:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=302
00:12:43.702   11:34:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:12:43.702   11:34:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:43.702   11:34:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:43.702   11:34:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:43.702   11:34:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:43.702   11:34:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:43.702    11:34:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:43.702    11:34:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:43.702    11:34:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:43.702    11:34:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:43.702    11:34:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:43.702   11:34:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:43.702    "name": "raid_bdev1",
00:12:43.702    "uuid": "7291e634-428b-4061-a4a1-b074192e04b8",
00:12:43.702    "strip_size_kb": 0,
00:12:43.702    "state": "online",
00:12:43.702    "raid_level": "raid1",
00:12:43.702    "superblock": false,
00:12:43.702    "num_base_bdevs": 2,
00:12:43.702    "num_base_bdevs_discovered": 2,
00:12:43.702    "num_base_bdevs_operational": 2,
00:12:43.703    "process": {
00:12:43.703      "type": "rebuild",
00:12:43.703      "target": "spare",
00:12:43.703      "progress": {
00:12:43.703        "blocks": 22528,
00:12:43.703        "percent": 34
00:12:43.703      }
00:12:43.703    },
00:12:43.703    "base_bdevs_list": [
00:12:43.703      {
00:12:43.703        "name": "spare",
00:12:43.703        "uuid": "0c699582-a489-57d3-980b-ee7a6bf9ea27",
00:12:43.703        "is_configured": true,
00:12:43.703        "data_offset": 0,
00:12:43.703        "data_size": 65536
00:12:43.703      },
00:12:43.703      {
00:12:43.703        "name": "BaseBdev2",
00:12:43.703        "uuid": "9ed44adb-5f7e-5569-bf74-eca6f4eab503",
00:12:43.703        "is_configured": true,
00:12:43.703        "data_offset": 0,
00:12:43.703        "data_size": 65536
00:12:43.703      }
00:12:43.703    ]
00:12:43.703  }'
00:12:43.703    11:34:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:43.703   11:34:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:43.703    11:34:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:43.703   11:34:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:43.703   11:34:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1
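From here the log is a polling loop: as long as the per-bdev process section still reports type "rebuild" with target "spare", the script sleeps one second and re-reads the state, bounded by the timeout of 302 seconds recorded above. A simplified sketch of that pattern (an approximation of the helper's logic, not the actual bdev_raid.sh code):

  timeout=302    # value captured in the trace above
  while (( SECONDS < timeout )); do
      info=$(scripts/rpc.py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
      [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]] || break
      [[ $(jq -r '.process.target // "none"' <<< "$info") == spare ]] || break
      sleep 1
  done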
00:12:44.641   11:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:12:44.641   11:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:44.641   11:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:44.641   11:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:44.641   11:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:44.641   11:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:44.641    11:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:44.641    11:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:44.641    11:34:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:44.641    11:34:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:44.901    11:34:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:44.901   11:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:44.901    "name": "raid_bdev1",
00:12:44.901    "uuid": "7291e634-428b-4061-a4a1-b074192e04b8",
00:12:44.901    "strip_size_kb": 0,
00:12:44.901    "state": "online",
00:12:44.901    "raid_level": "raid1",
00:12:44.901    "superblock": false,
00:12:44.901    "num_base_bdevs": 2,
00:12:44.901    "num_base_bdevs_discovered": 2,
00:12:44.901    "num_base_bdevs_operational": 2,
00:12:44.901    "process": {
00:12:44.901      "type": "rebuild",
00:12:44.901      "target": "spare",
00:12:44.901      "progress": {
00:12:44.901        "blocks": 47104,
00:12:44.901        "percent": 71
00:12:44.901      }
00:12:44.901    },
00:12:44.901    "base_bdevs_list": [
00:12:44.901      {
00:12:44.901        "name": "spare",
00:12:44.901        "uuid": "0c699582-a489-57d3-980b-ee7a6bf9ea27",
00:12:44.901        "is_configured": true,
00:12:44.901        "data_offset": 0,
00:12:44.901        "data_size": 65536
00:12:44.901      },
00:12:44.901      {
00:12:44.901        "name": "BaseBdev2",
00:12:44.901        "uuid": "9ed44adb-5f7e-5569-bf74-eca6f4eab503",
00:12:44.901        "is_configured": true,
00:12:44.901        "data_offset": 0,
00:12:44.901        "data_size": 65536
00:12:44.901      }
00:12:44.901    ]
00:12:44.901  }'
00:12:44.901    11:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:44.901   11:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:44.901    11:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:44.901   11:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:44.901   11:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1
00:12:45.836  [2024-12-16 11:34:11.600214] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:12:45.836  [2024-12-16 11:34:11.600360] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:12:45.836  [2024-12-16 11:34:11.600429] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
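The three notices above mark the end of the rebuild, and the next dump no longer carries a process section, so the poll loop breaks. The remaining checks confirm the array is fully healthy again, e.g.:

  scripts/rpc.py bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_operational'   # expect 2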
00:12:45.836   11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:12:45.836   11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:45.836   11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:45.836   11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:45.836   11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:45.836   11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:45.836    11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:45.836    11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:45.836    11:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:45.836    11:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:45.836    11:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:45.836   11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:45.836    "name": "raid_bdev1",
00:12:45.836    "uuid": "7291e634-428b-4061-a4a1-b074192e04b8",
00:12:45.836    "strip_size_kb": 0,
00:12:45.836    "state": "online",
00:12:45.836    "raid_level": "raid1",
00:12:45.836    "superblock": false,
00:12:45.836    "num_base_bdevs": 2,
00:12:45.836    "num_base_bdevs_discovered": 2,
00:12:45.836    "num_base_bdevs_operational": 2,
00:12:45.836    "base_bdevs_list": [
00:12:45.836      {
00:12:45.836        "name": "spare",
00:12:45.836        "uuid": "0c699582-a489-57d3-980b-ee7a6bf9ea27",
00:12:45.836        "is_configured": true,
00:12:45.836        "data_offset": 0,
00:12:45.836        "data_size": 65536
00:12:45.836      },
00:12:45.836      {
00:12:45.836        "name": "BaseBdev2",
00:12:45.836        "uuid": "9ed44adb-5f7e-5569-bf74-eca6f4eab503",
00:12:45.836        "is_configured": true,
00:12:45.836        "data_offset": 0,
00:12:45.836        "data_size": 65536
00:12:45.836      }
00:12:45.836    ]
00:12:45.836  }'
00:12:45.836    11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:46.094   11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:12:46.094    11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:46.094   11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:12:46.094   11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break
00:12:46.094   11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:12:46.094   11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:46.094   11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:12:46.094   11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none
00:12:46.094   11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:46.094    11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:46.094    11:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.094    11:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.094    11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:46.095    11:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.095   11:34:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:46.095    "name": "raid_bdev1",
00:12:46.095    "uuid": "7291e634-428b-4061-a4a1-b074192e04b8",
00:12:46.095    "strip_size_kb": 0,
00:12:46.095    "state": "online",
00:12:46.095    "raid_level": "raid1",
00:12:46.095    "superblock": false,
00:12:46.095    "num_base_bdevs": 2,
00:12:46.095    "num_base_bdevs_discovered": 2,
00:12:46.095    "num_base_bdevs_operational": 2,
00:12:46.095    "base_bdevs_list": [
00:12:46.095      {
00:12:46.095        "name": "spare",
00:12:46.095        "uuid": "0c699582-a489-57d3-980b-ee7a6bf9ea27",
00:12:46.095        "is_configured": true,
00:12:46.095        "data_offset": 0,
00:12:46.095        "data_size": 65536
00:12:46.095      },
00:12:46.095      {
00:12:46.095        "name": "BaseBdev2",
00:12:46.095        "uuid": "9ed44adb-5f7e-5569-bf74-eca6f4eab503",
00:12:46.095        "is_configured": true,
00:12:46.095        "data_offset": 0,
00:12:46.095        "data_size": 65536
00:12:46.095      }
00:12:46.095    ]
00:12:46.095  }'
00:12:46.095    11:34:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:46.095   11:34:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:12:46.095    11:34:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:46.095   11:34:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:12:46.095   11:34:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:12:46.095   11:34:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:46.095   11:34:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:46.095   11:34:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:46.095   11:34:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:46.095   11:34:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:12:46.095   11:34:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:46.095   11:34:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:46.095   11:34:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:46.095   11:34:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:46.095    11:34:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:46.095    11:34:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:46.095    11:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.095    11:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.095    11:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.353   11:34:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:46.353    "name": "raid_bdev1",
00:12:46.353    "uuid": "7291e634-428b-4061-a4a1-b074192e04b8",
00:12:46.353    "strip_size_kb": 0,
00:12:46.353    "state": "online",
00:12:46.353    "raid_level": "raid1",
00:12:46.353    "superblock": false,
00:12:46.353    "num_base_bdevs": 2,
00:12:46.353    "num_base_bdevs_discovered": 2,
00:12:46.353    "num_base_bdevs_operational": 2,
00:12:46.353    "base_bdevs_list": [
00:12:46.353      {
00:12:46.353        "name": "spare",
00:12:46.353        "uuid": "0c699582-a489-57d3-980b-ee7a6bf9ea27",
00:12:46.353        "is_configured": true,
00:12:46.353        "data_offset": 0,
00:12:46.353        "data_size": 65536
00:12:46.353      },
00:12:46.353      {
00:12:46.353        "name": "BaseBdev2",
00:12:46.353        "uuid": "9ed44adb-5f7e-5569-bf74-eca6f4eab503",
00:12:46.353        "is_configured": true,
00:12:46.353        "data_offset": 0,
00:12:46.353        "data_size": 65536
00:12:46.353      }
00:12:46.353    ]
00:12:46.353  }'
00:12:46.353   11:34:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:46.353   11:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.612   11:34:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:12:46.612   11:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.612   11:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.612  [2024-12-16 11:34:12.563213] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:46.612  [2024-12-16 11:34:12.563292] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:46.612  [2024-12-16 11:34:12.563414] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:46.612  [2024-12-16 11:34:12.563528] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:46.612  [2024-12-16 11:34:12.563614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:12:46.612   11:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.612    11:34:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:46.612    11:34:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length
00:12:46.612    11:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.612    11:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.612    11:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.612   11:34:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:12:46.612   11:34:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:12:46.612   11:34:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
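Cleanup of the array itself: raid_bdev1 is deleted and the test asserts that no RAID bdevs remain before moving on to verify the rebuilt data. Equivalent calls:

  scripts/rpc.py bdev_raid_delete raid_bdev1
  scripts/rpc.py bdev_raid_get_bdevs all | jq length    # expect 0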
00:12:46.612   11:34:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:12:46.612   11:34:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:12:46.612   11:34:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:12:46.612   11:34:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:12:46.612   11:34:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:12:46.612   11:34:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:12:46.612   11:34:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:12:46.612   11:34:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:12:46.612   11:34:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:12:46.612   11:34:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:12:46.871  /dev/nbd0
00:12:46.871    11:34:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:12:46.871   11:34:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:12:46.871   11:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:12:46.871   11:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i
00:12:46.871   11:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:12:46.871   11:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:12:46.871   11:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:12:46.871   11:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break
00:12:46.871   11:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:12:46.871   11:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:12:46.871   11:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:46.871  1+0 records in
00:12:46.871  1+0 records out
00:12:46.871  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530538 s, 7.7 MB/s
00:12:46.871    11:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:46.871   11:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096
00:12:46.871   11:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:46.871   11:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:12:46.871   11:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0
00:12:46.871   11:34:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:46.871   11:34:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:12:46.871   11:34:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1
00:12:47.130  /dev/nbd1
00:12:47.130    11:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:12:47.130   11:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:12:47.130   11:34:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:12:47.130   11:34:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i
00:12:47.130   11:34:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:12:47.130   11:34:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:12:47.130   11:34:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:12:47.130   11:34:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break
00:12:47.130   11:34:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:12:47.130   11:34:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:12:47.130   11:34:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:47.130  1+0 records in
00:12:47.130  1+0 records out
00:12:47.130  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441021 s, 9.3 MB/s
00:12:47.130    11:34:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:47.131   11:34:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096
00:12:47.131   11:34:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:47.131   11:34:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:12:47.131   11:34:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0
00:12:47.131   11:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:47.131   11:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:12:47.131   11:34:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
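Finally the data itself is verified: BaseBdev1 (which still holds the mirror's original contents from before it was removed) and the rebuilt spare are both exported over NBD and compared byte-for-byte; cmp exiting non-zero would fail the test, and with data_offset 0 the -i 0 skip is a no-op. In short:

  scripts/rpc.py nbd_start_disk BaseBdev1 /dev/nbd0
  scripts/rpc.py nbd_start_disk spare /dev/nbd1
  cmp -i 0 /dev/nbd0 /dev/nbd1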
00:12:47.131   11:34:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:12:47.131   11:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:12:47.131   11:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:12:47.131   11:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:12:47.131   11:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:12:47.131   11:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:47.131   11:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:12:47.389    11:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:12:47.389   11:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:12:47.389   11:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:12:47.389   11:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:47.389   11:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:47.389   11:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:12:47.389   11:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:12:47.389   11:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:12:47.389   11:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:47.389   11:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:12:47.648    11:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:12:47.648   11:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:12:47.648   11:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:12:47.648   11:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:47.648   11:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:47.648   11:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:12:47.648   11:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:12:47.648   11:34:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:12:47.648   11:34:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']'
00:12:47.648   11:34:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 86349
00:12:47.648   11:34:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 86349 ']'
00:12:47.648   11:34:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 86349
00:12:47.648    11:34:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname
00:12:47.648   11:34:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:12:47.648    11:34:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86349
00:12:47.648  killing process with pid 86349
00:12:47.648  Received shutdown signal, test time was about 60.000000 seconds
00:12:47.648  
00:12:47.648                                                                                                  Latency(us)
[2024-12-16T11:34:13.715Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-12-16T11:34:13.715Z]  ===================================================================================================================
[2024-12-16T11:34:13.715Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:12:47.648   11:34:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:12:47.648   11:34:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:12:47.648   11:34:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86349'
00:12:47.648   11:34:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 86349
00:12:47.648   11:34:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 86349
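The killprocess sequence above (autotest_common.sh@950-974) guards the shutdown of bdevperf pid 86349: it refuses an empty pid, confirms the process still exists with kill -0, resolves the command name (reactor_0 in this run) so a sudo wrapper would be handled correctly, then kills and reaps the process. A rough reconstruction under those assumptions, not the exact helper:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        # Only signal a process that is actually still alive
        kill -0 "$pid" || return 0
        if [ "$(uname)" = Linux ] && [ "$(ps --no-headers -o comm= "$pid")" = sudo ]; then
            # The real helper re-targets the signal when the process was started via
            # sudo; that branch is not taken here (the command name is reactor_0)
            :
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }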
00:12:47.648  [2024-12-16 11:34:13.677204] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:12:47.907  [2024-12-16 11:34:13.740539] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:12:48.166   11:34:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0
00:12:48.166  
00:12:48.166  real	0m14.081s
00:12:48.166  user	0m16.452s
00:12:48.166  sys	0m2.852s
00:12:48.166   11:34:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:48.166   11:34:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.166  ************************************
00:12:48.166  END TEST raid_rebuild_test
00:12:48.166  ************************************
00:12:48.166   11:34:14 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true
00:12:48.166   11:34:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:12:48.166   11:34:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:48.166   11:34:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:12:48.166  ************************************
00:12:48.166  START TEST raid_rebuild_test_sb
00:12:48.166  ************************************
00:12:48.167   11:34:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true
00:12:48.167   11:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1
00:12:48.167   11:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2
00:12:48.167   11:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true
00:12:48.167   11:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:12:48.167   11:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true
00:12:48.167    11:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:12:48.167    11:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:12:48.167    11:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:12:48.167    11:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:12:48.167    11:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:12:48.167    11:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:12:48.167    11:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:12:48.167    11:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:12:48.167   11:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:12:48.167   11:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:12:48.167   11:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:12:48.167   11:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size
00:12:48.167   11:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg
00:12:48.167   11:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:12:48.167   11:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset
00:12:48.167   11:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']'
00:12:48.167   11:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0
00:12:48.167   11:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
00:12:48.167   11:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
00:12:48.167   11:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=86760
00:12:48.167   11:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:12:48.167   11:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 86760
00:12:48.167   11:34:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 86760 ']'
00:12:48.167   11:34:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:48.167   11:34:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:12:48.167  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:48.167   11:34:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:48.167   11:34:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:12:48.167   11:34:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:48.426  I/O size of 3145728 is greater than zero copy threshold (65536).
00:12:48.426  Zero copy mechanism will not be used.
00:12:48.426  [2024-12-16 11:34:14.277953] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:12:48.426  [2024-12-16 11:34:14.278097] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86760 ]
00:12:48.426  [2024-12-16 11:34:14.436483] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:48.426  [2024-12-16 11:34:14.484257] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:12:48.684  [2024-12-16 11:34:14.527523] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:48.684  [2024-12-16 11:34:14.527660] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0
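The run starting at bdev_raid.sh@596 launches bdevperf idle and waits for its RPC socket before configuring anything. Condensed from the trace above ($rootdir is my shorthand for /home/vagrant/spdk_repo/spdk):

    # -z starts the app without running the workload so the raid can first be built
    # over RPC; -L bdev_raid enables the DEBUG messages seen throughout this log.
    # The eventual workload is 60 s of 50/50 random read/write at queue depth 2
    # with 3 MiB I/Os against raid_bdev1.
    "$rootdir"/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 \
        -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # Block until the app is listening on /var/tmp/spdk.sock
    waitforlisten "$raid_pid"

The 3 MiB I/O size (3145728 bytes) is what the "greater than zero copy threshold (65536)" notice above refers to; I/Os that large simply skip the zero-copy path.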
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:49.284  BaseBdev1_malloc
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:49.284  [2024-12-16 11:34:15.166301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:12:49.284  [2024-12-16 11:34:15.166378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:49.284  [2024-12-16 11:34:15.166409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:12:49.284  [2024-12-16 11:34:15.166427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:49.284  [2024-12-16 11:34:15.168809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:49.284  [2024-12-16 11:34:15.168848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:12:49.284  BaseBdev1
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:49.284  BaseBdev2_malloc
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:49.284  [2024-12-16 11:34:15.205353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:12:49.284  [2024-12-16 11:34:15.205506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:49.284  [2024-12-16 11:34:15.205556] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:12:49.284  [2024-12-16 11:34:15.205571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:49.284  [2024-12-16 11:34:15.208439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:49.284  [2024-12-16 11:34:15.208478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:12:49.284  BaseBdev2
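Each base device above is built the same way by the loop at bdev_raid.sh@601-603: a 32 MiB malloc bdev with 512-byte blocks, wrapped in a passthru vbdev that presents it under the BaseBdevN name. Condensed from the trace:

    for bdev in "${base_bdevs[@]}"; do
        # 32 MiB backing store, 512-byte blocks
        rpc_cmd bdev_malloc_create 32 512 -b "${bdev}_malloc"
        # Present it to the raid module under the plain BaseBdevN name
        rpc_cmd bdev_passthru_create -b "${bdev}_malloc" -p "$bdev"
    done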
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:49.284  spare_malloc
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:49.284  spare_delay
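The future rebuild target ("spare") gets one extra layer: a delay bdev between the malloc backing store and the passthru, created above with -w 100000 -n 100000. Those arguments appear to set the average and p99 write latency to 100000 us, so every rebuild write to the spare takes on the order of 100 ms; that is what keeps the rebuild slow enough for the later progress checks to catch it in flight. Condensed from the trace:

    rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
    # ~100 ms artificial write latency; read latencies (-r/-t) are left at 0
    rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    rpc_cmd bdev_passthru_create -b spare_delay -p spare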
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:49.284  [2024-12-16 11:34:15.246161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:12:49.284  [2024-12-16 11:34:15.246223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:49.284  [2024-12-16 11:34:15.246248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:12:49.284  [2024-12-16 11:34:15.246258] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:49.284  [2024-12-16 11:34:15.248595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:49.284  [2024-12-16 11:34:15.248643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:12:49.284  spare
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:49.284  [2024-12-16 11:34:15.258174] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:49.284  [2024-12-16 11:34:15.260267] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:49.284  [2024-12-16 11:34:15.260504] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:12:49.284  [2024-12-16 11:34:15.260526] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:12:49.284  [2024-12-16 11:34:15.260843] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:12:49.284  [2024-12-16 11:34:15.260999] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:12:49.284  [2024-12-16 11:34:15.261019] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:12:49.284  [2024-12-16 11:34:15.261159] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:49.284   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:12:49.285   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:49.285   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:49.285   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:49.285   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:49.285    11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:49.285    11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:49.285    11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:49.285    11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:49.285    11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.285   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:49.285    "name": "raid_bdev1",
00:12:49.285    "uuid": "91f894a4-9cd8-4919-a7c0-e317831cebe3",
00:12:49.285    "strip_size_kb": 0,
00:12:49.285    "state": "online",
00:12:49.285    "raid_level": "raid1",
00:12:49.285    "superblock": true,
00:12:49.285    "num_base_bdevs": 2,
00:12:49.285    "num_base_bdevs_discovered": 2,
00:12:49.285    "num_base_bdevs_operational": 2,
00:12:49.285    "base_bdevs_list": [
00:12:49.285      {
00:12:49.285        "name": "BaseBdev1",
00:12:49.285        "uuid": "88a0b013-8cc9-51bb-b72c-eecf8f88ae41",
00:12:49.285        "is_configured": true,
00:12:49.285        "data_offset": 2048,
00:12:49.285        "data_size": 63488
00:12:49.285      },
00:12:49.285      {
00:12:49.285        "name": "BaseBdev2",
00:12:49.285        "uuid": "b884d06d-1068-5e4e-9dcd-655129c11d2b",
00:12:49.285        "is_configured": true,
00:12:49.285        "data_offset": 2048,
00:12:49.285        "data_size": 63488
00:12:49.285      }
00:12:49.285    ]
00:12:49.285  }'
00:12:49.285   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:49.285   11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
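verify_raid_bdev_state (bdev_raid.sh@103-115, traced above) pulls the array state over RPC, filters it with jq, and asserts on the fields captured in the JSON blob. A rough sketch of that check, with the exact assertions assumed rather than copied from the script:

    verify_raid_bdev_state() {
        local name=$1 expected_state=$2 raid_level=$3 strip_size=$4 operational=$5
        local info
        info=$(rpc_cmd bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
        [[ $(jq -r '.state' <<< "$info") == "$expected_state" ]] || return 1
        [[ $(jq -r '.raid_level' <<< "$info") == "$raid_level" ]] || return 1
        [[ $(jq -r '.num_base_bdevs_operational' <<< "$info") -eq $operational ]] || return 1
    }

Here that amounts to: raid_bdev1 is online, raid1, with 2 of 2 base bdevs discovered and operational.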
00:12:49.851    11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:49.851    11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:49.851    11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:49.851    11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:12:49.851  [2024-12-16 11:34:15.733816] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:49.851    11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.851   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488
00:12:49.851    11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:12:49.851    11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:49.851    11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:49.851    11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:49.852    11:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.852   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048
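The two readbacks above tie together: each 32 MiB base bdev holds 65536 512-byte blocks, and because the array was created with -s (superblock), 2048 of them appear to be reserved at the front of every base bdev for the on-disk superblock metadata, leaving 63488 data blocks. That matches both raid_bdev_size=63488 and data_offset=2048:

    total_blocks=$(( 32 * 1024 * 1024 / 512 ))    # 65536
    data_offset=2048                              # blocks reserved ahead of the data
    echo $(( total_blocks - data_offset ))        # 63488, as reported above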
00:12:49.852   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:12:49.852   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:12:49.852   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:12:49.852   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:12:49.852   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:12:49.852   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:12:49.852   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:12:49.852   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:12:49.852   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:12:49.852   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:12:49.852   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:12:49.852   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:12:49.852   11:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:12:50.110  [2024-12-16 11:34:16.001060] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:12:50.110  /dev/nbd0
00:12:50.110    11:34:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:12:50.110   11:34:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:12:50.110   11:34:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:12:50.110   11:34:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i
00:12:50.110   11:34:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:12:50.111   11:34:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:12:50.111   11:34:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:12:50.111   11:34:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break
00:12:50.111   11:34:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:12:50.111   11:34:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:12:50.111   11:34:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:50.111  1+0 records in
00:12:50.111  1+0 records out
00:12:50.111  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000564473 s, 7.3 MB/s
00:12:50.111    11:34:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:50.111   11:34:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096
00:12:50.111   11:34:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:50.111   11:34:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:12:50.111   11:34:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0
00:12:50.111   11:34:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:50.111   11:34:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:12:50.111   11:34:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:12:50.111   11:34:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:12:50.111   11:34:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct
00:12:54.334  63488+0 records in
00:12:54.334  63488+0 records out
00:12:54.334  32505856 bytes (33 MB, 31 MiB) copied, 3.91743 s, 8.3 MB/s
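With write_unit_size=1 for raid1, the dd above simply fills every data block of the array with random data before a base bdev is pulled, so the subsequent rebuild has real content to copy. The byte count checks out against the geometry derived earlier:

    echo $(( 63488 * 512 ))    # 32505856 bytes, exactly what dd reports above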
00:12:54.334   11:34:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:12:54.334   11:34:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:12:54.334   11:34:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:12:54.334   11:34:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:12:54.334   11:34:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:12:54.334   11:34:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:54.334   11:34:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:12:54.334  [2024-12-16 11:34:20.192467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:54.334    11:34:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:12:54.334   11:34:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:12:54.334   11:34:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:12:54.334   11:34:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:54.334   11:34:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:54.334   11:34:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:12:54.334   11:34:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:12:54.334   11:34:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:12:54.334   11:34:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:12:54.334   11:34:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:54.334   11:34:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:54.334  [2024-12-16 11:34:20.224517] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:12:54.334   11:34:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:54.334   11:34:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:12:54.334   11:34:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:54.334   11:34:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:54.334   11:34:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:54.334   11:34:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:54.334   11:34:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:12:54.334   11:34:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:54.334   11:34:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:54.334   11:34:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:54.334   11:34:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:54.334    11:34:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:54.334    11:34:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:54.334    11:34:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:54.334    11:34:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:54.335    11:34:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:54.335   11:34:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:54.335    "name": "raid_bdev1",
00:12:54.335    "uuid": "91f894a4-9cd8-4919-a7c0-e317831cebe3",
00:12:54.335    "strip_size_kb": 0,
00:12:54.335    "state": "online",
00:12:54.335    "raid_level": "raid1",
00:12:54.335    "superblock": true,
00:12:54.335    "num_base_bdevs": 2,
00:12:54.335    "num_base_bdevs_discovered": 1,
00:12:54.335    "num_base_bdevs_operational": 1,
00:12:54.335    "base_bdevs_list": [
00:12:54.335      {
00:12:54.335        "name": null,
00:12:54.335        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:54.335        "is_configured": false,
00:12:54.335        "data_offset": 0,
00:12:54.335        "data_size": 63488
00:12:54.335      },
00:12:54.335      {
00:12:54.335        "name": "BaseBdev2",
00:12:54.335        "uuid": "b884d06d-1068-5e4e-9dcd-655129c11d2b",
00:12:54.335        "is_configured": true,
00:12:54.335        "data_offset": 2048,
00:12:54.335        "data_size": 63488
00:12:54.335      }
00:12:54.335    ]
00:12:54.335  }'
00:12:54.335   11:34:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:54.335   11:34:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:54.903   11:34:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:12:54.903   11:34:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:54.903   11:34:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:54.903  [2024-12-16 11:34:20.711709] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:12:54.903  [2024-12-16 11:34:20.716113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca31c0
00:12:54.903   11:34:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:54.903   11:34:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1
00:12:54.903  [2024-12-16 11:34:20.718293] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:12:55.839   11:34:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:55.839   11:34:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:55.839   11:34:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:55.839   11:34:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:55.839   11:34:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:55.839    11:34:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:55.839    11:34:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:55.839    11:34:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.839    11:34:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.839    11:34:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.839   11:34:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:55.839    "name": "raid_bdev1",
00:12:55.839    "uuid": "91f894a4-9cd8-4919-a7c0-e317831cebe3",
00:12:55.839    "strip_size_kb": 0,
00:12:55.839    "state": "online",
00:12:55.839    "raid_level": "raid1",
00:12:55.839    "superblock": true,
00:12:55.839    "num_base_bdevs": 2,
00:12:55.839    "num_base_bdevs_discovered": 2,
00:12:55.839    "num_base_bdevs_operational": 2,
00:12:55.839    "process": {
00:12:55.839      "type": "rebuild",
00:12:55.839      "target": "spare",
00:12:55.839      "progress": {
00:12:55.839        "blocks": 20480,
00:12:55.839        "percent": 32
00:12:55.839      }
00:12:55.839    },
00:12:55.839    "base_bdevs_list": [
00:12:55.839      {
00:12:55.839        "name": "spare",
00:12:55.839        "uuid": "4fd6276d-5e7b-5cbb-bf1f-66edccdbc9cd",
00:12:55.839        "is_configured": true,
00:12:55.839        "data_offset": 2048,
00:12:55.839        "data_size": 63488
00:12:55.839      },
00:12:55.840      {
00:12:55.840        "name": "BaseBdev2",
00:12:55.840        "uuid": "b884d06d-1068-5e4e-9dcd-655129c11d2b",
00:12:55.840        "is_configured": true,
00:12:55.840        "data_offset": 2048,
00:12:55.840        "data_size": 63488
00:12:55.840      }
00:12:55.840    ]
00:12:55.840  }'
00:12:55.840    11:34:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:55.840   11:34:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:55.840    11:34:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:55.840   11:34:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
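verify_raid_bdev_process (bdev_raid.sh@169-177, traced above) only checks that .process.type is "rebuild" and .process.target is "spare". The percentage in the progress object is evidently just completed blocks over the 63488-block data size, which is easy to sanity-check against the samples in this log:

    echo $(( 20480 * 100 / 63488 ))    # 32 -> matches "percent": 32 above
    echo $(( 47104 * 100 / 63488 ))    # 74 -> matches the later sample at blocks 47104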
00:12:55.840   11:34:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:12:55.840   11:34:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.840   11:34:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.840  [2024-12-16 11:34:21.902701] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:12:56.098  [2024-12-16 11:34:21.923282] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:12:56.098  [2024-12-16 11:34:21.923399] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:56.098  [2024-12-16 11:34:21.923442] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:12:56.098  [2024-12-16 11:34:21.923481] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:12:56.098   11:34:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:56.098   11:34:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:12:56.098   11:34:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:56.098   11:34:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:56.098   11:34:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:56.098   11:34:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:56.098   11:34:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:12:56.098   11:34:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:56.098   11:34:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:56.098   11:34:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:56.098   11:34:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:56.098    11:34:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:56.098    11:34:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:56.098    11:34:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:56.098    11:34:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:56.098    11:34:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:56.098   11:34:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:56.098    "name": "raid_bdev1",
00:12:56.098    "uuid": "91f894a4-9cd8-4919-a7c0-e317831cebe3",
00:12:56.098    "strip_size_kb": 0,
00:12:56.098    "state": "online",
00:12:56.098    "raid_level": "raid1",
00:12:56.098    "superblock": true,
00:12:56.098    "num_base_bdevs": 2,
00:12:56.098    "num_base_bdevs_discovered": 1,
00:12:56.098    "num_base_bdevs_operational": 1,
00:12:56.098    "base_bdevs_list": [
00:12:56.098      {
00:12:56.098        "name": null,
00:12:56.098        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:56.098        "is_configured": false,
00:12:56.098        "data_offset": 0,
00:12:56.098        "data_size": 63488
00:12:56.098      },
00:12:56.098      {
00:12:56.098        "name": "BaseBdev2",
00:12:56.098        "uuid": "b884d06d-1068-5e4e-9dcd-655129c11d2b",
00:12:56.098        "is_configured": true,
00:12:56.098        "data_offset": 2048,
00:12:56.098        "data_size": 63488
00:12:56.098      }
00:12:56.098    ]
00:12:56.098  }'
00:12:56.098   11:34:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:56.098   11:34:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:56.357   11:34:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:12:56.357   11:34:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:56.357   11:34:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:12:56.357   11:34:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:12:56.357   11:34:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:56.357    11:34:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:56.357    11:34:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:56.357    11:34:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:56.357    11:34:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:56.357    11:34:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:56.616   11:34:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:56.616    "name": "raid_bdev1",
00:12:56.616    "uuid": "91f894a4-9cd8-4919-a7c0-e317831cebe3",
00:12:56.616    "strip_size_kb": 0,
00:12:56.616    "state": "online",
00:12:56.616    "raid_level": "raid1",
00:12:56.616    "superblock": true,
00:12:56.616    "num_base_bdevs": 2,
00:12:56.616    "num_base_bdevs_discovered": 1,
00:12:56.616    "num_base_bdevs_operational": 1,
00:12:56.616    "base_bdevs_list": [
00:12:56.616      {
00:12:56.616        "name": null,
00:12:56.616        "uuid": "00000000-0000-0000-0000-000000000000",
00:12:56.616        "is_configured": false,
00:12:56.616        "data_offset": 0,
00:12:56.616        "data_size": 63488
00:12:56.616      },
00:12:56.616      {
00:12:56.616        "name": "BaseBdev2",
00:12:56.616        "uuid": "b884d06d-1068-5e4e-9dcd-655129c11d2b",
00:12:56.616        "is_configured": true,
00:12:56.616        "data_offset": 2048,
00:12:56.616        "data_size": 63488
00:12:56.616      }
00:12:56.616    ]
00:12:56.616  }'
00:12:56.616    11:34:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:56.616   11:34:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:12:56.616    11:34:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:56.616   11:34:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:12:56.616   11:34:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:12:56.616   11:34:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:56.616   11:34:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:56.616  [2024-12-16 11:34:22.531261] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:12:56.616  [2024-12-16 11:34:22.535530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3290
00:12:56.616   11:34:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:56.616   11:34:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1
00:12:56.616  [2024-12-16 11:34:22.537522] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:12:57.552   11:34:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:57.552   11:34:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:57.552   11:34:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:57.552   11:34:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:57.552   11:34:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:57.552    11:34:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:57.552    11:34:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:57.552    11:34:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.552    11:34:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:57.552    11:34:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:57.552   11:34:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:57.552    "name": "raid_bdev1",
00:12:57.552    "uuid": "91f894a4-9cd8-4919-a7c0-e317831cebe3",
00:12:57.552    "strip_size_kb": 0,
00:12:57.552    "state": "online",
00:12:57.552    "raid_level": "raid1",
00:12:57.552    "superblock": true,
00:12:57.552    "num_base_bdevs": 2,
00:12:57.552    "num_base_bdevs_discovered": 2,
00:12:57.552    "num_base_bdevs_operational": 2,
00:12:57.552    "process": {
00:12:57.552      "type": "rebuild",
00:12:57.552      "target": "spare",
00:12:57.552      "progress": {
00:12:57.552        "blocks": 20480,
00:12:57.552        "percent": 32
00:12:57.552      }
00:12:57.552    },
00:12:57.552    "base_bdevs_list": [
00:12:57.552      {
00:12:57.552        "name": "spare",
00:12:57.552        "uuid": "4fd6276d-5e7b-5cbb-bf1f-66edccdbc9cd",
00:12:57.552        "is_configured": true,
00:12:57.552        "data_offset": 2048,
00:12:57.552        "data_size": 63488
00:12:57.552      },
00:12:57.552      {
00:12:57.552        "name": "BaseBdev2",
00:12:57.552        "uuid": "b884d06d-1068-5e4e-9dcd-655129c11d2b",
00:12:57.552        "is_configured": true,
00:12:57.552        "data_offset": 2048,
00:12:57.552        "data_size": 63488
00:12:57.552      }
00:12:57.552    ]
00:12:57.552  }'
00:12:57.552    11:34:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:57.811   11:34:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:57.811    11:34:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:57.811   11:34:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:57.811   11:34:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:12:57.811   11:34:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:12:57.811  /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
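The "unary operator expected" message above is a shell slip in the test script rather than a raid failure: an unset or empty variable expands to nothing inside a single-bracket test on bdev_raid.sh line 666, leaving [ with only '=' and 'false' as operands. An illustration of the failure mode and the usual fixes (some_flag is a stand-in name, not the script's actual variable):

    some_flag=
    [ $some_flag = false ]       # expands to '[ = false ]' -> "[: =: unary operator expected"
    [ "$some_flag" = false ]     # quoting keeps the empty word; the test just returns false
    [[ $some_flag == false ]]    # [[ ]] does not word-split, so it is safe unquoted

The run continues past it because the error only makes that test evaluate as false.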
00:12:57.811   11:34:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:12:57.811   11:34:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:12:57.811   11:34:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:12:57.811   11:34:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=316
00:12:57.811   11:34:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:12:57.811   11:34:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:57.811   11:34:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:57.811   11:34:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:57.811   11:34:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:57.811   11:34:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:57.811    11:34:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:57.811    11:34:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:57.811    11:34:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.811    11:34:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:57.811    11:34:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:57.811   11:34:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:57.811    "name": "raid_bdev1",
00:12:57.811    "uuid": "91f894a4-9cd8-4919-a7c0-e317831cebe3",
00:12:57.811    "strip_size_kb": 0,
00:12:57.811    "state": "online",
00:12:57.811    "raid_level": "raid1",
00:12:57.811    "superblock": true,
00:12:57.811    "num_base_bdevs": 2,
00:12:57.811    "num_base_bdevs_discovered": 2,
00:12:57.811    "num_base_bdevs_operational": 2,
00:12:57.811    "process": {
00:12:57.811      "type": "rebuild",
00:12:57.811      "target": "spare",
00:12:57.811      "progress": {
00:12:57.811        "blocks": 22528,
00:12:57.811        "percent": 35
00:12:57.811      }
00:12:57.811    },
00:12:57.811    "base_bdevs_list": [
00:12:57.811      {
00:12:57.811        "name": "spare",
00:12:57.811        "uuid": "4fd6276d-5e7b-5cbb-bf1f-66edccdbc9cd",
00:12:57.811        "is_configured": true,
00:12:57.811        "data_offset": 2048,
00:12:57.811        "data_size": 63488
00:12:57.811      },
00:12:57.811      {
00:12:57.811        "name": "BaseBdev2",
00:12:57.811        "uuid": "b884d06d-1068-5e4e-9dcd-655129c11d2b",
00:12:57.811        "is_configured": true,
00:12:57.811        "data_offset": 2048,
00:12:57.811        "data_size": 63488
00:12:57.811      }
00:12:57.811    ]
00:12:57.811  }'
00:12:57.811    11:34:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:57.811   11:34:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:57.811    11:34:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:57.811   11:34:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:57.811   11:34:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:12:59.190   11:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:12:59.190   11:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:59.190   11:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:59.190   11:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:59.190   11:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:59.190   11:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:59.190    11:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:59.190    11:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:59.190    11:34:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:59.190    11:34:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:59.190    11:34:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:59.190   11:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:59.190    "name": "raid_bdev1",
00:12:59.190    "uuid": "91f894a4-9cd8-4919-a7c0-e317831cebe3",
00:12:59.190    "strip_size_kb": 0,
00:12:59.190    "state": "online",
00:12:59.190    "raid_level": "raid1",
00:12:59.190    "superblock": true,
00:12:59.190    "num_base_bdevs": 2,
00:12:59.190    "num_base_bdevs_discovered": 2,
00:12:59.190    "num_base_bdevs_operational": 2,
00:12:59.190    "process": {
00:12:59.190      "type": "rebuild",
00:12:59.190      "target": "spare",
00:12:59.190      "progress": {
00:12:59.190        "blocks": 47104,
00:12:59.190        "percent": 74
00:12:59.190      }
00:12:59.190    },
00:12:59.190    "base_bdevs_list": [
00:12:59.190      {
00:12:59.190        "name": "spare",
00:12:59.190        "uuid": "4fd6276d-5e7b-5cbb-bf1f-66edccdbc9cd",
00:12:59.190        "is_configured": true,
00:12:59.190        "data_offset": 2048,
00:12:59.190        "data_size": 63488
00:12:59.190      },
00:12:59.190      {
00:12:59.190        "name": "BaseBdev2",
00:12:59.190        "uuid": "b884d06d-1068-5e4e-9dcd-655129c11d2b",
00:12:59.190        "is_configured": true,
00:12:59.190        "data_offset": 2048,
00:12:59.190        "data_size": 63488
00:12:59.190      }
00:12:59.190    ]
00:12:59.190  }'
00:12:59.190    11:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:59.190   11:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:59.190    11:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:59.190   11:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:59.190   11:34:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:12:59.760  [2024-12-16 11:34:25.649294] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:12:59.760  [2024-12-16 11:34:25.649381] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:12:59.760  [2024-12-16 11:34:25.649486] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:00.020   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:13:00.020   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:00.020   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:00.020   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:00.020   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:00.020   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:00.020    11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:00.020    11:34:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:00.020    11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:00.020    11:34:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:00.020    11:34:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:00.020   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:00.020    "name": "raid_bdev1",
00:13:00.020    "uuid": "91f894a4-9cd8-4919-a7c0-e317831cebe3",
00:13:00.020    "strip_size_kb": 0,
00:13:00.020    "state": "online",
00:13:00.020    "raid_level": "raid1",
00:13:00.020    "superblock": true,
00:13:00.020    "num_base_bdevs": 2,
00:13:00.020    "num_base_bdevs_discovered": 2,
00:13:00.020    "num_base_bdevs_operational": 2,
00:13:00.020    "base_bdevs_list": [
00:13:00.020      {
00:13:00.020        "name": "spare",
00:13:00.020        "uuid": "4fd6276d-5e7b-5cbb-bf1f-66edccdbc9cd",
00:13:00.020        "is_configured": true,
00:13:00.020        "data_offset": 2048,
00:13:00.020        "data_size": 63488
00:13:00.020      },
00:13:00.020      {
00:13:00.020        "name": "BaseBdev2",
00:13:00.020        "uuid": "b884d06d-1068-5e4e-9dcd-655129c11d2b",
00:13:00.020        "is_configured": true,
00:13:00.020        "data_offset": 2048,
00:13:00.020        "data_size": 63488
00:13:00.020      }
00:13:00.020    ]
00:13:00.020  }'
00:13:00.020    11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:00.287   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:13:00.287    11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:00.287   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:13:00.287   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break
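That break is the exit from the progress-polling loop started at bdev_raid.sh@706-711: once "Finished rebuild on raid bdev raid_bdev1" is logged, .process disappears from the RPC output, the rebuild/spare checks stop matching, and the loop ends well inside the 316 s bound set at @706. A sketch of the loop shape implied by the trace, not the literal script:

    timeout=$(( SECONDS + 300 ))
    while (( SECONDS < timeout )); do
        # Keep polling while a rebuild targeting the spare is still reported
        verify_raid_bdev_process raid_bdev1 rebuild spare || break
        sleep 1
    done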
00:13:00.287   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:00.287   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:00.287   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:00.287   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:00.287   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:00.287    11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:00.287    11:34:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:00.287    11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:00.287    11:34:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:00.287    11:34:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:00.287   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:00.287    "name": "raid_bdev1",
00:13:00.287    "uuid": "91f894a4-9cd8-4919-a7c0-e317831cebe3",
00:13:00.287    "strip_size_kb": 0,
00:13:00.287    "state": "online",
00:13:00.287    "raid_level": "raid1",
00:13:00.287    "superblock": true,
00:13:00.287    "num_base_bdevs": 2,
00:13:00.287    "num_base_bdevs_discovered": 2,
00:13:00.287    "num_base_bdevs_operational": 2,
00:13:00.287    "base_bdevs_list": [
00:13:00.287      {
00:13:00.287        "name": "spare",
00:13:00.287        "uuid": "4fd6276d-5e7b-5cbb-bf1f-66edccdbc9cd",
00:13:00.287        "is_configured": true,
00:13:00.287        "data_offset": 2048,
00:13:00.287        "data_size": 63488
00:13:00.287      },
00:13:00.287      {
00:13:00.287        "name": "BaseBdev2",
00:13:00.287        "uuid": "b884d06d-1068-5e4e-9dcd-655129c11d2b",
00:13:00.287        "is_configured": true,
00:13:00.287        "data_offset": 2048,
00:13:00.287        "data_size": 63488
00:13:00.287      }
00:13:00.287    ]
00:13:00.287  }'
00:13:00.287    11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:00.287   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:00.287    11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:00.287   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:00.287   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:13:00.287   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:00.287   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:00.287   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:00.287   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:00.287   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:00.287   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:00.287   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:00.287   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:00.287   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:00.287    11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:00.287    11:34:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:00.287    11:34:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:00.287    11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:00.287    11:34:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:00.557   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:00.557    "name": "raid_bdev1",
00:13:00.557    "uuid": "91f894a4-9cd8-4919-a7c0-e317831cebe3",
00:13:00.557    "strip_size_kb": 0,
00:13:00.557    "state": "online",
00:13:00.557    "raid_level": "raid1",
00:13:00.557    "superblock": true,
00:13:00.557    "num_base_bdevs": 2,
00:13:00.557    "num_base_bdevs_discovered": 2,
00:13:00.557    "num_base_bdevs_operational": 2,
00:13:00.557    "base_bdevs_list": [
00:13:00.557      {
00:13:00.557        "name": "spare",
00:13:00.557        "uuid": "4fd6276d-5e7b-5cbb-bf1f-66edccdbc9cd",
00:13:00.557        "is_configured": true,
00:13:00.557        "data_offset": 2048,
00:13:00.557        "data_size": 63488
00:13:00.557      },
00:13:00.557      {
00:13:00.557        "name": "BaseBdev2",
00:13:00.557        "uuid": "b884d06d-1068-5e4e-9dcd-655129c11d2b",
00:13:00.557        "is_configured": true,
00:13:00.557        "data_offset": 2048,
00:13:00.557        "data_size": 63488
00:13:00.557      }
00:13:00.557    ]
00:13:00.557  }'
00:13:00.557   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:00.557   11:34:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:00.817   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:13:00.817   11:34:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:00.817   11:34:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:00.817  [2024-12-16 11:34:26.840007] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:00.817  [2024-12-16 11:34:26.840093] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:00.817  [2024-12-16 11:34:26.840238] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:00.817  [2024-12-16 11:34:26.840376] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:00.817  [2024-12-16 11:34:26.840442] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:13:00.817   11:34:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:00.817    11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:00.817    11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length
00:13:00.817    11:34:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:00.817    11:34:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:00.817    11:34:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:01.078   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:13:01.078   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:13:01.078   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:13:01.078   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:13:01.078   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:13:01.078   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:13:01.078   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:13:01.078   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:13:01.078   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:13:01.078   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:13:01.078   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:13:01.078   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:13:01.078   11:34:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:13:01.078  /dev/nbd0
00:13:01.338    11:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:13:01.338   11:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:13:01.339   11:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:13:01.339   11:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i
00:13:01.339   11:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:13:01.339   11:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:13:01.339   11:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:13:01.339   11:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break
00:13:01.339   11:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:13:01.339   11:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:13:01.339   11:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:01.339  1+0 records in
00:13:01.339  1+0 records out
00:13:01.339  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508967 s, 8.0 MB/s
00:13:01.339    11:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:01.339   11:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096
00:13:01.339   11:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:01.339   11:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:13:01.339   11:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0
00:13:01.339   11:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:01.339   11:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:13:01.339   11:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1
00:13:01.599  /dev/nbd1
00:13:01.599    11:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:13:01.599   11:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:13:01.599   11:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:13:01.599   11:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i
00:13:01.599   11:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:13:01.599   11:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:13:01.599   11:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:13:01.599   11:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break
00:13:01.599   11:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:13:01.599   11:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:13:01.599   11:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:01.599  1+0 records in
00:13:01.599  1+0 records out
00:13:01.599  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305963 s, 13.4 MB/s
00:13:01.599    11:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:01.599   11:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096
00:13:01.599   11:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:01.599   11:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:13:01.599   11:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0
00:13:01.599   11:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:01.599   11:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:13:01.599   11:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:13:01.599   11:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:13:01.599   11:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:13:01.599   11:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:13:01.599   11:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:01.599   11:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:13:01.599   11:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:01.599   11:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:13:01.859    11:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:01.859   11:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:01.859   11:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:01.859   11:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:01.859   11:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:01.859   11:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:01.859   11:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:13:01.859   11:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:13:01.859   11:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:01.859   11:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:13:02.119    11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:13:02.119   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:13:02.119   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:13:02.119   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:02.119   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:02.119   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:13:02.119   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:13:02.119   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:13:02.119   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:13:02.119   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:13:02.119   11:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:02.119   11:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:02.119   11:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:02.119   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:13:02.119   11:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:02.119   11:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:02.119  [2024-12-16 11:34:28.031342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:13:02.119  [2024-12-16 11:34:28.031458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:02.119  [2024-12-16 11:34:28.031489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:13:02.119  [2024-12-16 11:34:28.031504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:02.119  [2024-12-16 11:34:28.033869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:02.119  [2024-12-16 11:34:28.033961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:13:02.119  [2024-12-16 11:34:28.034065] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:13:02.119  [2024-12-16 11:34:28.034122] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:02.119  [2024-12-16 11:34:28.034237] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:02.119  spare
00:13:02.119   11:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:02.119   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:13:02.119   11:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:02.119   11:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:02.119  [2024-12-16 11:34:28.134146] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600
00:13:02.119  [2024-12-16 11:34:28.134267] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:13:02.119  [2024-12-16 11:34:28.134673] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1940
00:13:02.119  [2024-12-16 11:34:28.134862] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600
00:13:02.119  [2024-12-16 11:34:28.134889] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600
00:13:02.119  [2024-12-16 11:34:28.135083] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:02.119   11:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:02.119   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:13:02.120   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:02.120   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:02.120   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:02.120   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:02.120   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:02.120   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:02.120   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:02.120   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:02.120   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:02.120    11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:02.120    11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:02.120    11:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:02.120    11:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:02.120    11:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:02.380   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:02.380    "name": "raid_bdev1",
00:13:02.380    "uuid": "91f894a4-9cd8-4919-a7c0-e317831cebe3",
00:13:02.380    "strip_size_kb": 0,
00:13:02.380    "state": "online",
00:13:02.380    "raid_level": "raid1",
00:13:02.380    "superblock": true,
00:13:02.380    "num_base_bdevs": 2,
00:13:02.380    "num_base_bdevs_discovered": 2,
00:13:02.380    "num_base_bdevs_operational": 2,
00:13:02.380    "base_bdevs_list": [
00:13:02.380      {
00:13:02.380        "name": "spare",
00:13:02.380        "uuid": "4fd6276d-5e7b-5cbb-bf1f-66edccdbc9cd",
00:13:02.380        "is_configured": true,
00:13:02.380        "data_offset": 2048,
00:13:02.380        "data_size": 63488
00:13:02.380      },
00:13:02.380      {
00:13:02.380        "name": "BaseBdev2",
00:13:02.380        "uuid": "b884d06d-1068-5e4e-9dcd-655129c11d2b",
00:13:02.380        "is_configured": true,
00:13:02.380        "data_offset": 2048,
00:13:02.380        "data_size": 63488
00:13:02.380      }
00:13:02.380    ]
00:13:02.380  }'
00:13:02.380   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:02.380   11:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:02.642   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:02.642   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:02.642   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:02.642   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:02.642   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:02.642    11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:02.642    11:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:02.642    11:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:02.642    11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:02.642    11:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:02.642   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:02.642    "name": "raid_bdev1",
00:13:02.642    "uuid": "91f894a4-9cd8-4919-a7c0-e317831cebe3",
00:13:02.642    "strip_size_kb": 0,
00:13:02.642    "state": "online",
00:13:02.642    "raid_level": "raid1",
00:13:02.642    "superblock": true,
00:13:02.642    "num_base_bdevs": 2,
00:13:02.642    "num_base_bdevs_discovered": 2,
00:13:02.642    "num_base_bdevs_operational": 2,
00:13:02.642    "base_bdevs_list": [
00:13:02.642      {
00:13:02.642        "name": "spare",
00:13:02.642        "uuid": "4fd6276d-5e7b-5cbb-bf1f-66edccdbc9cd",
00:13:02.642        "is_configured": true,
00:13:02.642        "data_offset": 2048,
00:13:02.642        "data_size": 63488
00:13:02.642      },
00:13:02.642      {
00:13:02.642        "name": "BaseBdev2",
00:13:02.642        "uuid": "b884d06d-1068-5e4e-9dcd-655129c11d2b",
00:13:02.642        "is_configured": true,
00:13:02.642        "data_offset": 2048,
00:13:02.642        "data_size": 63488
00:13:02.642      }
00:13:02.642    ]
00:13:02.642  }'
00:13:02.642    11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:02.642   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:02.642    11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:02.642   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:02.642    11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:02.642    11:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:02.642    11:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:02.642    11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:13:02.642    11:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:02.903   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:13:02.903   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:13:02.903   11:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:02.903   11:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:02.903  [2024-12-16 11:34:28.722172] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:02.903   11:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:02.903   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:02.903   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:02.903   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:02.903   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:02.903   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:02.903   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:13:02.903   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:02.903   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:02.903   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:02.904   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:02.904    11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:02.904    11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:02.904    11:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:02.904    11:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:02.904    11:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:02.904   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:02.904    "name": "raid_bdev1",
00:13:02.904    "uuid": "91f894a4-9cd8-4919-a7c0-e317831cebe3",
00:13:02.904    "strip_size_kb": 0,
00:13:02.904    "state": "online",
00:13:02.904    "raid_level": "raid1",
00:13:02.904    "superblock": true,
00:13:02.904    "num_base_bdevs": 2,
00:13:02.904    "num_base_bdevs_discovered": 1,
00:13:02.904    "num_base_bdevs_operational": 1,
00:13:02.904    "base_bdevs_list": [
00:13:02.904      {
00:13:02.904        "name": null,
00:13:02.904        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:02.904        "is_configured": false,
00:13:02.904        "data_offset": 0,
00:13:02.904        "data_size": 63488
00:13:02.904      },
00:13:02.904      {
00:13:02.904        "name": "BaseBdev2",
00:13:02.904        "uuid": "b884d06d-1068-5e4e-9dcd-655129c11d2b",
00:13:02.904        "is_configured": true,
00:13:02.904        "data_offset": 2048,
00:13:02.904        "data_size": 63488
00:13:02.904      }
00:13:02.904    ]
00:13:02.904  }'
00:13:02.904   11:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:02.904   11:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:03.163   11:34:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:13:03.163   11:34:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:03.163   11:34:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:03.163  [2024-12-16 11:34:29.165451] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:03.163  [2024-12-16 11:34:29.165755] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:13:03.163  [2024-12-16 11:34:29.165835] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:13:03.163  [2024-12-16 11:34:29.165917] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:03.163  [2024-12-16 11:34:29.170179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1a10
00:13:03.163   11:34:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:03.163   11:34:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1
00:13:03.163  [2024-12-16 11:34:29.172334] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:04.550   11:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:04.550   11:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:04.550   11:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:04.550   11:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:04.550   11:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:04.550    11:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:04.550    11:34:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:04.550    11:34:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:04.550    11:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:04.550    11:34:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:04.550   11:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:04.551    "name": "raid_bdev1",
00:13:04.551    "uuid": "91f894a4-9cd8-4919-a7c0-e317831cebe3",
00:13:04.551    "strip_size_kb": 0,
00:13:04.551    "state": "online",
00:13:04.551    "raid_level": "raid1",
00:13:04.551    "superblock": true,
00:13:04.551    "num_base_bdevs": 2,
00:13:04.551    "num_base_bdevs_discovered": 2,
00:13:04.551    "num_base_bdevs_operational": 2,
00:13:04.551    "process": {
00:13:04.551      "type": "rebuild",
00:13:04.551      "target": "spare",
00:13:04.551      "progress": {
00:13:04.551        "blocks": 20480,
00:13:04.551        "percent": 32
00:13:04.551      }
00:13:04.551    },
00:13:04.551    "base_bdevs_list": [
00:13:04.551      {
00:13:04.551        "name": "spare",
00:13:04.551        "uuid": "4fd6276d-5e7b-5cbb-bf1f-66edccdbc9cd",
00:13:04.551        "is_configured": true,
00:13:04.551        "data_offset": 2048,
00:13:04.551        "data_size": 63488
00:13:04.551      },
00:13:04.551      {
00:13:04.551        "name": "BaseBdev2",
00:13:04.551        "uuid": "b884d06d-1068-5e4e-9dcd-655129c11d2b",
00:13:04.551        "is_configured": true,
00:13:04.551        "data_offset": 2048,
00:13:04.551        "data_size": 63488
00:13:04.551      }
00:13:04.551    ]
00:13:04.551  }'
00:13:04.551    11:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:04.551   11:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:04.551    11:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:04.551   11:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:04.551   11:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:13:04.551   11:34:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:04.551   11:34:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:04.551  [2024-12-16 11:34:30.308151] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:04.551  [2024-12-16 11:34:30.376929] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:13:04.551  [2024-12-16 11:34:30.376990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:04.551  [2024-12-16 11:34:30.377008] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:04.551  [2024-12-16 11:34:30.377016] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:13:04.551   11:34:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:04.551   11:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:04.551   11:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:04.551   11:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:04.551   11:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:04.551   11:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:04.551   11:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:13:04.551   11:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:04.551   11:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:04.551   11:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:04.551   11:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:04.551    11:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:04.551    11:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:04.551    11:34:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:04.551    11:34:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:04.551    11:34:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:04.551   11:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:04.551    "name": "raid_bdev1",
00:13:04.551    "uuid": "91f894a4-9cd8-4919-a7c0-e317831cebe3",
00:13:04.551    "strip_size_kb": 0,
00:13:04.551    "state": "online",
00:13:04.551    "raid_level": "raid1",
00:13:04.551    "superblock": true,
00:13:04.551    "num_base_bdevs": 2,
00:13:04.551    "num_base_bdevs_discovered": 1,
00:13:04.551    "num_base_bdevs_operational": 1,
00:13:04.551    "base_bdevs_list": [
00:13:04.551      {
00:13:04.551        "name": null,
00:13:04.551        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:04.551        "is_configured": false,
00:13:04.551        "data_offset": 0,
00:13:04.551        "data_size": 63488
00:13:04.551      },
00:13:04.551      {
00:13:04.551        "name": "BaseBdev2",
00:13:04.551        "uuid": "b884d06d-1068-5e4e-9dcd-655129c11d2b",
00:13:04.551        "is_configured": true,
00:13:04.551        "data_offset": 2048,
00:13:04.551        "data_size": 63488
00:13:04.551      }
00:13:04.551    ]
00:13:04.551  }'
00:13:04.551   11:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:04.551   11:34:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:04.811   11:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:13:04.811   11:34:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:04.811   11:34:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:04.811  [2024-12-16 11:34:30.828768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:13:04.811  [2024-12-16 11:34:30.828841] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:04.811  [2024-12-16 11:34:30.828869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:13:04.811  [2024-12-16 11:34:30.828879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:04.811  [2024-12-16 11:34:30.829361] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:04.811  [2024-12-16 11:34:30.829394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:13:04.811  [2024-12-16 11:34:30.829492] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:13:04.811  [2024-12-16 11:34:30.829510] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:13:04.811  [2024-12-16 11:34:30.829528] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:13:04.811  [2024-12-16 11:34:30.829568] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:04.811  [2024-12-16 11:34:30.833834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0
00:13:04.811  spare
00:13:04.811   11:34:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:04.811   11:34:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1
00:13:04.811  [2024-12-16 11:34:30.836015] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:06.191   11:34:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:06.191   11:34:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:06.191   11:34:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:06.191   11:34:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:06.191   11:34:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:06.191    11:34:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:06.191    11:34:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:06.191    11:34:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:06.191    11:34:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:06.191    11:34:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:06.191   11:34:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:06.191    "name": "raid_bdev1",
00:13:06.191    "uuid": "91f894a4-9cd8-4919-a7c0-e317831cebe3",
00:13:06.191    "strip_size_kb": 0,
00:13:06.191    "state": "online",
00:13:06.191    "raid_level": "raid1",
00:13:06.191    "superblock": true,
00:13:06.191    "num_base_bdevs": 2,
00:13:06.191    "num_base_bdevs_discovered": 2,
00:13:06.191    "num_base_bdevs_operational": 2,
00:13:06.191    "process": {
00:13:06.191      "type": "rebuild",
00:13:06.191      "target": "spare",
00:13:06.191      "progress": {
00:13:06.191        "blocks": 20480,
00:13:06.192        "percent": 32
00:13:06.192      }
00:13:06.192    },
00:13:06.192    "base_bdevs_list": [
00:13:06.192      {
00:13:06.192        "name": "spare",
00:13:06.192        "uuid": "4fd6276d-5e7b-5cbb-bf1f-66edccdbc9cd",
00:13:06.192        "is_configured": true,
00:13:06.192        "data_offset": 2048,
00:13:06.192        "data_size": 63488
00:13:06.192      },
00:13:06.192      {
00:13:06.192        "name": "BaseBdev2",
00:13:06.192        "uuid": "b884d06d-1068-5e4e-9dcd-655129c11d2b",
00:13:06.192        "is_configured": true,
00:13:06.192        "data_offset": 2048,
00:13:06.192        "data_size": 63488
00:13:06.192      }
00:13:06.192    ]
00:13:06.192  }'
00:13:06.192    11:34:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:06.192   11:34:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:06.192    11:34:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:06.192   11:34:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:06.192   11:34:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:13:06.192   11:34:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:06.192   11:34:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:06.192  [2024-12-16 11:34:31.999986] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:06.192  [2024-12-16 11:34:32.040908] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:13:06.192  [2024-12-16 11:34:32.040988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:06.192  [2024-12-16 11:34:32.041006] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:06.192  [2024-12-16 11:34:32.041016] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:13:06.192   11:34:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:06.192   11:34:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:06.192   11:34:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:06.192   11:34:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:06.192   11:34:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:06.192   11:34:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:06.192   11:34:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:13:06.192   11:34:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:06.192   11:34:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:06.192   11:34:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:06.192   11:34:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:06.192    11:34:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:06.192    11:34:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:06.192    11:34:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:06.192    11:34:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:06.192    11:34:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:06.192   11:34:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:06.192    "name": "raid_bdev1",
00:13:06.192    "uuid": "91f894a4-9cd8-4919-a7c0-e317831cebe3",
00:13:06.192    "strip_size_kb": 0,
00:13:06.192    "state": "online",
00:13:06.192    "raid_level": "raid1",
00:13:06.192    "superblock": true,
00:13:06.192    "num_base_bdevs": 2,
00:13:06.192    "num_base_bdevs_discovered": 1,
00:13:06.192    "num_base_bdevs_operational": 1,
00:13:06.192    "base_bdevs_list": [
00:13:06.192      {
00:13:06.192        "name": null,
00:13:06.192        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:06.192        "is_configured": false,
00:13:06.192        "data_offset": 0,
00:13:06.192        "data_size": 63488
00:13:06.192      },
00:13:06.192      {
00:13:06.192        "name": "BaseBdev2",
00:13:06.192        "uuid": "b884d06d-1068-5e4e-9dcd-655129c11d2b",
00:13:06.192        "is_configured": true,
00:13:06.192        "data_offset": 2048,
00:13:06.192        "data_size": 63488
00:13:06.192      }
00:13:06.192    ]
00:13:06.192  }'
00:13:06.192   11:34:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:06.192   11:34:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:06.451   11:34:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:06.451   11:34:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:06.451   11:34:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:06.451   11:34:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:06.451   11:34:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:06.451    11:34:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:06.451    11:34:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:06.451    11:34:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:06.451    11:34:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:06.451    11:34:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:06.711   11:34:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:06.711    "name": "raid_bdev1",
00:13:06.711    "uuid": "91f894a4-9cd8-4919-a7c0-e317831cebe3",
00:13:06.711    "strip_size_kb": 0,
00:13:06.711    "state": "online",
00:13:06.711    "raid_level": "raid1",
00:13:06.711    "superblock": true,
00:13:06.711    "num_base_bdevs": 2,
00:13:06.711    "num_base_bdevs_discovered": 1,
00:13:06.711    "num_base_bdevs_operational": 1,
00:13:06.711    "base_bdevs_list": [
00:13:06.711      {
00:13:06.711        "name": null,
00:13:06.711        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:06.711        "is_configured": false,
00:13:06.711        "data_offset": 0,
00:13:06.711        "data_size": 63488
00:13:06.711      },
00:13:06.711      {
00:13:06.711        "name": "BaseBdev2",
00:13:06.711        "uuid": "b884d06d-1068-5e4e-9dcd-655129c11d2b",
00:13:06.711        "is_configured": true,
00:13:06.711        "data_offset": 2048,
00:13:06.711        "data_size": 63488
00:13:06.711      }
00:13:06.711    ]
00:13:06.711  }'
00:13:06.711    11:34:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:06.711   11:34:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:06.711    11:34:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:06.711   11:34:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:06.711   11:34:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:13:06.711   11:34:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:06.711   11:34:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:06.711   11:34:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:06.711   11:34:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:13:06.711   11:34:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:06.712   11:34:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:06.712  [2024-12-16 11:34:32.652575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:13:06.712  [2024-12-16 11:34:32.652639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:06.712  [2024-12-16 11:34:32.652661] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:13:06.712  [2024-12-16 11:34:32.652690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:06.712  [2024-12-16 11:34:32.653125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:06.712  [2024-12-16 11:34:32.653154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:13:06.712  [2024-12-16 11:34:32.653233] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:13:06.712  [2024-12-16 11:34:32.653255] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:13:06.712  [2024-12-16 11:34:32.653264] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:13:06.712  [2024-12-16 11:34:32.653302] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:13:06.712  BaseBdev1
00:13:06.712   11:34:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:06.712   11:34:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1
00:13:07.662   11:34:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:07.662   11:34:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:07.662   11:34:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:07.662   11:34:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:07.662   11:34:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:07.662   11:34:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:13:07.662   11:34:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:07.662   11:34:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:07.662   11:34:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:07.662   11:34:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:07.662    11:34:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:07.662    11:34:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:07.662    11:34:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:07.662    11:34:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:07.662    11:34:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:07.662   11:34:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:07.662    "name": "raid_bdev1",
00:13:07.662    "uuid": "91f894a4-9cd8-4919-a7c0-e317831cebe3",
00:13:07.662    "strip_size_kb": 0,
00:13:07.662    "state": "online",
00:13:07.662    "raid_level": "raid1",
00:13:07.662    "superblock": true,
00:13:07.662    "num_base_bdevs": 2,
00:13:07.662    "num_base_bdevs_discovered": 1,
00:13:07.662    "num_base_bdevs_operational": 1,
00:13:07.662    "base_bdevs_list": [
00:13:07.662      {
00:13:07.662        "name": null,
00:13:07.662        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:07.662        "is_configured": false,
00:13:07.662        "data_offset": 0,
00:13:07.662        "data_size": 63488
00:13:07.662      },
00:13:07.662      {
00:13:07.662        "name": "BaseBdev2",
00:13:07.662        "uuid": "b884d06d-1068-5e4e-9dcd-655129c11d2b",
00:13:07.662        "is_configured": true,
00:13:07.662        "data_offset": 2048,
00:13:07.662        "data_size": 63488
00:13:07.662      }
00:13:07.662    ]
00:13:07.662  }'
00:13:07.662   11:34:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:07.662   11:34:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:08.231   11:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:08.231   11:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:08.231   11:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:08.231   11:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:08.231   11:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:08.231    11:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:08.231    11:34:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:08.231    11:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:08.231    11:34:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:08.231    11:34:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:08.231   11:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:08.231    "name": "raid_bdev1",
00:13:08.231    "uuid": "91f894a4-9cd8-4919-a7c0-e317831cebe3",
00:13:08.231    "strip_size_kb": 0,
00:13:08.231    "state": "online",
00:13:08.231    "raid_level": "raid1",
00:13:08.231    "superblock": true,
00:13:08.231    "num_base_bdevs": 2,
00:13:08.231    "num_base_bdevs_discovered": 1,
00:13:08.231    "num_base_bdevs_operational": 1,
00:13:08.231    "base_bdevs_list": [
00:13:08.231      {
00:13:08.231        "name": null,
00:13:08.231        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:08.231        "is_configured": false,
00:13:08.231        "data_offset": 0,
00:13:08.231        "data_size": 63488
00:13:08.231      },
00:13:08.231      {
00:13:08.231        "name": "BaseBdev2",
00:13:08.231        "uuid": "b884d06d-1068-5e4e-9dcd-655129c11d2b",
00:13:08.231        "is_configured": true,
00:13:08.231        "data_offset": 2048,
00:13:08.231        "data_size": 63488
00:13:08.231      }
00:13:08.232    ]
00:13:08.232  }'
00:13:08.232    11:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:08.232   11:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:08.232    11:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:08.491   11:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:08.491   11:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:13:08.491   11:34:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0
00:13:08.491   11:34:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:13:08.491   11:34:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:13:08.491   11:34:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:08.491    11:34:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:13:08.491   11:34:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:08.491   11:34:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:13:08.491   11:34:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:08.491   11:34:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:08.491  [2024-12-16 11:34:34.309814] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:08.491  [2024-12-16 11:34:34.310061] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:13:08.491  [2024-12-16 11:34:34.310128] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:13:08.491  request:
00:13:08.491  {
00:13:08.491  "base_bdev": "BaseBdev1",
00:13:08.491  "raid_bdev": "raid_bdev1",
00:13:08.491  "method": "bdev_raid_add_base_bdev",
00:13:08.491  "req_id": 1
00:13:08.491  }
00:13:08.491  Got JSON-RPC error response
00:13:08.491  response:
00:13:08.491  {
00:13:08.491  "code": -22,
00:13:08.491  "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:13:08.491  }
00:13:08.491   11:34:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:13:08.491   11:34:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1
00:13:08.491   11:34:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:13:08.491   11:34:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:13:08.491   11:34:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:13:08.491   11:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1
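
The block above is the harness's expected-failure pattern: BaseBdev1 still carries a stale superblock (seq_number 1 vs. the array's 5), so bdev_raid_add_base_bdev must be rejected with -EINVAL, and the NOT wrapper converts the non-zero exit status into a pass. A minimal bash sketch of that pattern, using the rpc_cmd wrapper visible in the trace (the real NOT helper in autotest_common.sh does more bookkeeping than this):

    # Expected-failure check, reduced to its core: the RPC must fail for the test to pass.
    if rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1; then
        echo "bdev_raid_add_base_bdev unexpectedly succeeded" >&2
        exit 1
    fi
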
00:13:09.430   11:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:09.430   11:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:09.430   11:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:09.430   11:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:09.430   11:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:09.430   11:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:13:09.430   11:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:09.430   11:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:09.430   11:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:09.430   11:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:09.430    11:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:09.430    11:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:09.430    11:34:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:09.430    11:34:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:09.430    11:34:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:09.430   11:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:09.430    "name": "raid_bdev1",
00:13:09.430    "uuid": "91f894a4-9cd8-4919-a7c0-e317831cebe3",
00:13:09.430    "strip_size_kb": 0,
00:13:09.430    "state": "online",
00:13:09.430    "raid_level": "raid1",
00:13:09.430    "superblock": true,
00:13:09.430    "num_base_bdevs": 2,
00:13:09.430    "num_base_bdevs_discovered": 1,
00:13:09.430    "num_base_bdevs_operational": 1,
00:13:09.430    "base_bdevs_list": [
00:13:09.430      {
00:13:09.430        "name": null,
00:13:09.430        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:09.430        "is_configured": false,
00:13:09.430        "data_offset": 0,
00:13:09.430        "data_size": 63488
00:13:09.430      },
00:13:09.430      {
00:13:09.430        "name": "BaseBdev2",
00:13:09.430        "uuid": "b884d06d-1068-5e4e-9dcd-655129c11d2b",
00:13:09.430        "is_configured": true,
00:13:09.430        "data_offset": 2048,
00:13:09.430        "data_size": 63488
00:13:09.430      }
00:13:09.430    ]
00:13:09.430  }'
00:13:09.430   11:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:09.430   11:34:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
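
verify_raid_bdev_state, as traced above, pulls the raid bdev's JSON and compares it field by field against the expected values (state online, level raid1, strip size 0, one operational base bdev). A reduced sketch of those checks, assuming only the field names visible in the JSON dump above; the real helper in bdev_raid.sh performs additional consistency checks against base_bdevs_list:

    # Reduced sketch of verify_raid_bdev_state raid_bdev1 online raid1 0 1
    raid_bdev_info=$(rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.state' <<< "$raid_bdev_info") == "online" ]]
    [[ $(jq -r '.raid_level' <<< "$raid_bdev_info") == "raid1" ]]
    [[ $(jq -r '.strip_size_kb' <<< "$raid_bdev_info") -eq 0 ]]
    [[ $(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info") -eq 1 ]]
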
00:13:09.999   11:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:09.999   11:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:09.999   11:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:09.999   11:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:09.999   11:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:09.999    11:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:09.999    11:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:09.999    11:34:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:09.999    11:34:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:09.999    11:34:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:09.999   11:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:10.000    "name": "raid_bdev1",
00:13:10.000    "uuid": "91f894a4-9cd8-4919-a7c0-e317831cebe3",
00:13:10.000    "strip_size_kb": 0,
00:13:10.000    "state": "online",
00:13:10.000    "raid_level": "raid1",
00:13:10.000    "superblock": true,
00:13:10.000    "num_base_bdevs": 2,
00:13:10.000    "num_base_bdevs_discovered": 1,
00:13:10.000    "num_base_bdevs_operational": 1,
00:13:10.000    "base_bdevs_list": [
00:13:10.000      {
00:13:10.000        "name": null,
00:13:10.000        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:10.000        "is_configured": false,
00:13:10.000        "data_offset": 0,
00:13:10.000        "data_size": 63488
00:13:10.000      },
00:13:10.000      {
00:13:10.000        "name": "BaseBdev2",
00:13:10.000        "uuid": "b884d06d-1068-5e4e-9dcd-655129c11d2b",
00:13:10.000        "is_configured": true,
00:13:10.000        "data_offset": 2048,
00:13:10.000        "data_size": 63488
00:13:10.000      }
00:13:10.000    ]
00:13:10.000  }'
00:13:10.000    11:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:10.000   11:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:10.000    11:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:10.000   11:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:10.000   11:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 86760
00:13:10.000   11:34:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 86760 ']'
00:13:10.000   11:34:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 86760
00:13:10.000    11:34:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname
00:13:10.000   11:34:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:13:10.000    11:34:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86760
00:13:10.000   11:34:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:13:10.000  killing process with pid 86760
00:13:10.000  Received shutdown signal, test time was about 60.000000 seconds
00:13:10.000  
00:13:10.000                                                                                                  Latency(us)
[2024-12-16T11:34:36.067Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-12-16T11:34:36.067Z]  ===================================================================================================================
[2024-12-16T11:34:36.067Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:13:10.000   11:34:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:13:10.000   11:34:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86760'
00:13:10.000   11:34:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 86760
00:13:10.000  [2024-12-16 11:34:36.008273] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:10.000   11:34:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 86760
00:13:10.000  [2024-12-16 11:34:36.008415] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:10.000  [2024-12-16 11:34:36.008472] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:10.000  [2024-12-16 11:34:36.008483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline
00:13:10.000  [2024-12-16 11:34:36.041226] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:10.260   11:34:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0
00:13:10.260  
00:13:10.260  real	0m22.109s
00:13:10.260  user	0m27.716s
00:13:10.260  sys	0m3.579s
00:13:10.260   11:34:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:10.260   11:34:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:10.260  ************************************
00:13:10.260  END TEST raid_rebuild_test_sb
00:13:10.260  ************************************
00:13:10.520   11:34:36 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true
00:13:10.520   11:34:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:13:10.520   11:34:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:10.520   11:34:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:13:10.520  ************************************
00:13:10.520  START TEST raid_rebuild_test_io
00:13:10.520  ************************************
00:13:10.520   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true
00:13:10.520   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1
00:13:10.520   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2
00:13:10.520   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false
00:13:10.520   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true
00:13:10.520   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true
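
The five positional arguments of this run_test invocation bind to the locals just set, so this run exercises a two-disk RAID1 without on-disk superblocks, with background I/O and verification enabled:

    # Positional arguments, in order, bind to the locals traced above:
    #   raid_rebuild_test <raid_level> <num_base_bdevs> <superblock> <background_io> <verify>
    raid_rebuild_test raid1 2 false true true
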
00:13:10.520    11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:13:10.520    11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:13:10.520    11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:13:10.520    11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:13:10.520    11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:13:10.520    11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:13:10.520    11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:13:10.520    11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:13:10.520   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:13:10.520   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:13:10.520   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:13:10.520   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size
00:13:10.520   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg
00:13:10.520   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:13:10.520   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset
00:13:10.520   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']'
00:13:10.520   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0
00:13:10.520   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']'
00:13:10.520   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87483
00:13:10.520   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:13:10.520   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87483
00:13:10.520   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 87483 ']'
00:13:10.520   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:10.520   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:10.520   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:10.520  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:10.520   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:10.520   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:10.521  [2024-12-16 11:34:36.469590] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:13:10.521  [2024-12-16 11:34:36.469829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87483 ]
00:13:10.521  I/O size of 3145728 is greater than zero copy threshold (65536).
00:13:10.521  Zero copy mechanism will not be used.
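
The bdevperf invocation above provides the background load for this test. A commented restatement of it follows; the flag meanings are taken from bdevperf's usage text and from what the log itself confirms (the 3 MiB I/O size is what triggers the zero-copy notice just printed, and perform_tests is issued later to start the idle app):

    # -t 60 (run time in seconds), -w randrw -M 50 (random I/O, 50% reads),
    # -o 3M (3 MiB I/O size, the reason zero copy is disabled above), -q 2 (queue depth),
    # -z (start idle and wait for the perform_tests RPC), -L bdev_raid (bdev_raid debug logging),
    # -T raid_bdev1 (run the workload against raid_bdev1); -U is passed through as in the trace.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
            -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
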
00:13:10.780  [2024-12-16 11:34:36.618724] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:10.780  [2024-12-16 11:34:36.669419] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:13:10.780  [2024-12-16 11:34:36.713808] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:10.780  [2024-12-16 11:34:36.713943] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:10.780   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:10.780   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0
00:13:10.780   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:13:10.780   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:13:10.780   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:10.780   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:10.780  BaseBdev1_malloc
00:13:10.780   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:10.780   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:13:10.780   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:10.780   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:10.780  [2024-12-16 11:34:36.802702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:13:10.780  [2024-12-16 11:34:36.802824] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:10.780  [2024-12-16 11:34:36.802877] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:13:10.780  [2024-12-16 11:34:36.802919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:10.780  [2024-12-16 11:34:36.805464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:10.780  [2024-12-16 11:34:36.805572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:13:10.780  BaseBdev1
00:13:10.780   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:10.780   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:13:10.780   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:13:10.780   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:10.780   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:10.780  BaseBdev2_malloc
00:13:10.780   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:10.780   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:13:10.780   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:10.780   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:10.780  [2024-12-16 11:34:36.842059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:13:10.780  [2024-12-16 11:34:36.842120] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:10.780  [2024-12-16 11:34:36.842145] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:13:10.780  [2024-12-16 11:34:36.842156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:10.780  [2024-12-16 11:34:36.844557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:10.780  [2024-12-16 11:34:36.844592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:13:11.040  BaseBdev2
00:13:11.040   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.040   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:13:11.040   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.040   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:11.040  spare_malloc
00:13:11.040   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.040   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:13:11.040   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.040   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:11.040  spare_delay
00:13:11.040   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.040   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:13:11.040   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.040   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:11.040  [2024-12-16 11:34:36.883367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:13:11.040  [2024-12-16 11:34:36.883433] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:11.040  [2024-12-16 11:34:36.883460] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:13:11.040  [2024-12-16 11:34:36.883470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:11.040  [2024-12-16 11:34:36.886025] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:11.040  [2024-12-16 11:34:36.886067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:13:11.040  spare
00:13:11.040   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.040   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1
00:13:11.040   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.040   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:11.040  [2024-12-16 11:34:36.895386] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:11.040  [2024-12-16 11:34:36.897654] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:11.040  [2024-12-16 11:34:36.897749] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:13:11.040  [2024-12-16 11:34:36.897762] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:13:11.040  [2024-12-16 11:34:36.898077] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:13:11.040  [2024-12-16 11:34:36.898230] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:13:11.040  [2024-12-16 11:34:36.898252] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:13:11.040  [2024-12-16 11:34:36.898384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
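
The RPCs traced since the process came up assemble the bdev stack this test operates on: two malloc bdevs wrapped in passthru bdevs as the RAID members, plus a delay-wrapped malloc that is hot-added later as the spare. Restated here as direct scripts/rpc.py calls (rpc_cmd in the trace ultimately drives the same RPCs over /var/tmp/spdk.sock):

    rpc.py bdev_malloc_create 32 512 -b BaseBdev1_malloc
    rpc.py bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    rpc.py bdev_malloc_create 32 512 -b BaseBdev2_malloc
    rpc.py bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
    rpc.py bdev_malloc_create 32 512 -b spare_malloc
    # delay bdev: 0 us read latency, 100000 us average/p99 write latency, presumably so the
    # rebuild onto the spare stays slow enough to observe while background I/O runs
    rpc.py bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    rpc.py bdev_passthru_create -b spare_delay -p spare
    rpc.py bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
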
00:13:11.040   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.040   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:13:11.040   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:11.040   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:11.040   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:11.040   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:11.040   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:11.040   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:11.040   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:11.040   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:11.040   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:11.040    11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:11.040    11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:11.040    11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.040    11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:11.040    11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.040   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:11.040    "name": "raid_bdev1",
00:13:11.040    "uuid": "d9fe4b52-448c-47fa-80b2-cd3326585d3b",
00:13:11.040    "strip_size_kb": 0,
00:13:11.040    "state": "online",
00:13:11.040    "raid_level": "raid1",
00:13:11.040    "superblock": false,
00:13:11.040    "num_base_bdevs": 2,
00:13:11.040    "num_base_bdevs_discovered": 2,
00:13:11.040    "num_base_bdevs_operational": 2,
00:13:11.040    "base_bdevs_list": [
00:13:11.040      {
00:13:11.040        "name": "BaseBdev1",
00:13:11.040        "uuid": "2eb0abca-324c-5524-914e-bee041c4f122",
00:13:11.040        "is_configured": true,
00:13:11.040        "data_offset": 0,
00:13:11.040        "data_size": 65536
00:13:11.040      },
00:13:11.040      {
00:13:11.040        "name": "BaseBdev2",
00:13:11.040        "uuid": "80fd2965-d092-5348-859d-3432c3bdd2e1",
00:13:11.040        "is_configured": true,
00:13:11.040        "data_offset": 0,
00:13:11.040        "data_size": 65536
00:13:11.040      }
00:13:11.040    ]
00:13:11.040  }'
00:13:11.040   11:34:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:11.040   11:34:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:11.609    11:34:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:11.609    11:34:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.609    11:34:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:11.609    11:34:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:13:11.609  [2024-12-16 11:34:37.399079] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:11.609    11:34:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.609   11:34:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536
00:13:11.609    11:34:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:13:11.609    11:34:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:11.609    11:34:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.609    11:34:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:11.609    11:34:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.609   11:34:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0
00:13:11.609   11:34:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']'
00:13:11.609   11:34:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:13:11.609   11:34:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:13:11.609   11:34:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.609   11:34:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:11.609  [2024-12-16 11:34:37.502571] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:13:11.609   11:34:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.609   11:34:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:11.609   11:34:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:11.609   11:34:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:11.609   11:34:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:11.609   11:34:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:11.609   11:34:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:13:11.609   11:34:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:11.609   11:34:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:11.609   11:34:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:11.609   11:34:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:11.609    11:34:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:11.609    11:34:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:11.609    11:34:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.609    11:34:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:11.609    11:34:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.609   11:34:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:11.609    "name": "raid_bdev1",
00:13:11.609    "uuid": "d9fe4b52-448c-47fa-80b2-cd3326585d3b",
00:13:11.609    "strip_size_kb": 0,
00:13:11.609    "state": "online",
00:13:11.609    "raid_level": "raid1",
00:13:11.609    "superblock": false,
00:13:11.609    "num_base_bdevs": 2,
00:13:11.609    "num_base_bdevs_discovered": 1,
00:13:11.609    "num_base_bdevs_operational": 1,
00:13:11.609    "base_bdevs_list": [
00:13:11.609      {
00:13:11.609        "name": null,
00:13:11.609        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:11.609        "is_configured": false,
00:13:11.609        "data_offset": 0,
00:13:11.609        "data_size": 65536
00:13:11.609      },
00:13:11.609      {
00:13:11.609        "name": "BaseBdev2",
00:13:11.609        "uuid": "80fd2965-d092-5348-859d-3432c3bdd2e1",
00:13:11.609        "is_configured": true,
00:13:11.609        "data_offset": 0,
00:13:11.609        "data_size": 65536
00:13:11.609      }
00:13:11.609    ]
00:13:11.609  }'
00:13:11.609   11:34:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:11.609   11:34:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:11.609  [2024-12-16 11:34:37.608517] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:13:11.609  I/O size of 3145728 is greater than zero copy threshold (65536).
00:13:11.609  Zero copy mechanism will not be used.
00:13:11.609  Running I/O for 60 seconds...
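
At this point the test degrades the freshly created array and starts the background load: BaseBdev1 is removed over RPC, and perform_tests tells the idle bdevperf app (started with -z) to begin its 60-second random read/write run against raid_bdev1. The two calls driving this phase, as they appear in the trace:

    rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
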
00:13:12.178   11:34:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:13:12.178   11:34:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:12.178   11:34:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:12.178  [2024-12-16 11:34:37.969800] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:12.178   11:34:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:12.178   11:34:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1
00:13:12.178  [2024-12-16 11:34:38.020806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:13:12.178  [2024-12-16 11:34:38.023070] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:12.178  [2024-12-16 11:34:38.131277] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:13:12.178  [2024-12-16 11:34:38.131930] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:13:12.438  [2024-12-16 11:34:38.341707] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:13:12.438  [2024-12-16 11:34:38.342149] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:13:12.697        169.00 IOPS,   507.00 MiB/s
[2024-12-16T11:34:38.764Z] [2024-12-16 11:34:38.681962] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:13:12.957  [2024-12-16 11:34:38.898852] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
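
Hot-adding the spare is what kicks off the rebuild: the raid module claims the spare bdev, starts the process thread, and the split: process_offset lines that follow show the rebuild window advancing through the array while the bdevperf load keeps running. The calls behind this step, as traced above:

    # Hot-add the spare; the raid module claims it and starts the rebuild process.
    rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
    # The harness then checks that a rebuild targeting the spare is reported.
    rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .process'
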
00:13:12.957   11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:12.957   11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:12.957   11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:12.957   11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:12.957   11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:12.957    11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:12.957    11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:12.957    11:34:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:12.957    11:34:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:13.216    11:34:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:13.216   11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:13.216    "name": "raid_bdev1",
00:13:13.216    "uuid": "d9fe4b52-448c-47fa-80b2-cd3326585d3b",
00:13:13.216    "strip_size_kb": 0,
00:13:13.216    "state": "online",
00:13:13.216    "raid_level": "raid1",
00:13:13.216    "superblock": false,
00:13:13.216    "num_base_bdevs": 2,
00:13:13.216    "num_base_bdevs_discovered": 2,
00:13:13.216    "num_base_bdevs_operational": 2,
00:13:13.216    "process": {
00:13:13.216      "type": "rebuild",
00:13:13.216      "target": "spare",
00:13:13.216      "progress": {
00:13:13.216        "blocks": 10240,
00:13:13.216        "percent": 15
00:13:13.216      }
00:13:13.216    },
00:13:13.216    "base_bdevs_list": [
00:13:13.216      {
00:13:13.216        "name": "spare",
00:13:13.216        "uuid": "ada45bb1-4858-56ea-b925-b2c7b0b8ae5f",
00:13:13.217        "is_configured": true,
00:13:13.217        "data_offset": 0,
00:13:13.217        "data_size": 65536
00:13:13.217      },
00:13:13.217      {
00:13:13.217        "name": "BaseBdev2",
00:13:13.217        "uuid": "80fd2965-d092-5348-859d-3432c3bdd2e1",
00:13:13.217        "is_configured": true,
00:13:13.217        "data_offset": 0,
00:13:13.217        "data_size": 65536
00:13:13.217      }
00:13:13.217    ]
00:13:13.217  }'
00:13:13.217    11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:13.217   11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:13.217    11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:13.217   11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:13.217   11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:13:13.217   11:34:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:13.217   11:34:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:13.217  [2024-12-16 11:34:39.155794] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:13.477  [2024-12-16 11:34:39.337373] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:13:13.477  [2024-12-16 11:34:39.346689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:13.477  [2024-12-16 11:34:39.346745] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:13.477  [2024-12-16 11:34:39.346759] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:13:13.477  [2024-12-16 11:34:39.366299] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0
00:13:13.477   11:34:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:13.477   11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:13.477   11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:13.477   11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:13.477   11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:13.477   11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:13.477   11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:13:13.477   11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:13.477   11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:13.477   11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:13.477   11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:13.477    11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:13.477    11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:13.477    11:34:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:13.477    11:34:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:13.477    11:34:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:13.477   11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:13.477    "name": "raid_bdev1",
00:13:13.477    "uuid": "d9fe4b52-448c-47fa-80b2-cd3326585d3b",
00:13:13.477    "strip_size_kb": 0,
00:13:13.477    "state": "online",
00:13:13.477    "raid_level": "raid1",
00:13:13.477    "superblock": false,
00:13:13.477    "num_base_bdevs": 2,
00:13:13.477    "num_base_bdevs_discovered": 1,
00:13:13.477    "num_base_bdevs_operational": 1,
00:13:13.477    "base_bdevs_list": [
00:13:13.477      {
00:13:13.477        "name": null,
00:13:13.477        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:13.477        "is_configured": false,
00:13:13.477        "data_offset": 0,
00:13:13.477        "data_size": 65536
00:13:13.477      },
00:13:13.477      {
00:13:13.477        "name": "BaseBdev2",
00:13:13.477        "uuid": "80fd2965-d092-5348-859d-3432c3bdd2e1",
00:13:13.477        "is_configured": true,
00:13:13.477        "data_offset": 0,
00:13:13.477        "data_size": 65536
00:13:13.477      }
00:13:13.477    ]
00:13:13.477  }'
00:13:13.477   11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:13.477   11:34:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:13.997        146.50 IOPS,   439.50 MiB/s
[2024-12-16T11:34:40.064Z]  11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:13.997   11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:13.997   11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:13.997   11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:13.997   11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:13.997    11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:13.997    11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:13.997    11:34:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:13.997    11:34:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:13.997    11:34:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:13.997   11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:13.997    "name": "raid_bdev1",
00:13:13.997    "uuid": "d9fe4b52-448c-47fa-80b2-cd3326585d3b",
00:13:13.997    "strip_size_kb": 0,
00:13:13.997    "state": "online",
00:13:13.997    "raid_level": "raid1",
00:13:13.997    "superblock": false,
00:13:13.997    "num_base_bdevs": 2,
00:13:13.997    "num_base_bdevs_discovered": 1,
00:13:13.997    "num_base_bdevs_operational": 1,
00:13:13.997    "base_bdevs_list": [
00:13:13.997      {
00:13:13.997        "name": null,
00:13:13.997        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:13.997        "is_configured": false,
00:13:13.997        "data_offset": 0,
00:13:13.997        "data_size": 65536
00:13:13.997      },
00:13:13.997      {
00:13:13.997        "name": "BaseBdev2",
00:13:13.997        "uuid": "80fd2965-d092-5348-859d-3432c3bdd2e1",
00:13:13.997        "is_configured": true,
00:13:13.997        "data_offset": 0,
00:13:13.997        "data_size": 65536
00:13:13.997      }
00:13:13.997    ]
00:13:13.997  }'
00:13:13.997    11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:13.997   11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:13.997    11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:13.997   11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:13.997   11:34:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:13:13.997   11:34:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:13.997   11:34:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:13.997  [2024-12-16 11:34:39.995050] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:13.997   11:34:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:13.997   11:34:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1
00:13:13.997  [2024-12-16 11:34:40.050149] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:13:13.997  [2024-12-16 11:34:40.052289] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:14.257  [2024-12-16 11:34:40.178309] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:13:14.257  [2024-12-16 11:34:40.178852] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:13:14.257  [2024-12-16 11:34:40.299702] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:13:14.257  [2024-12-16 11:34:40.300112] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:13:14.834        157.00 IOPS,   471.00 MiB/s
[2024-12-16T11:34:40.901Z] [2024-12-16 11:34:40.652060] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:13:14.834  [2024-12-16 11:34:40.652481] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:13:14.834  [2024-12-16 11:34:40.861267] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:13:14.834  [2024-12-16 11:34:40.861590] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:13:15.100   11:34:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:15.100   11:34:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:15.100   11:34:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:15.100   11:34:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:15.100   11:34:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:15.100    11:34:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:15.100    11:34:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:15.100    11:34:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:15.100    11:34:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:15.100    11:34:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:15.100   11:34:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:15.100    "name": "raid_bdev1",
00:13:15.100    "uuid": "d9fe4b52-448c-47fa-80b2-cd3326585d3b",
00:13:15.100    "strip_size_kb": 0,
00:13:15.100    "state": "online",
00:13:15.100    "raid_level": "raid1",
00:13:15.100    "superblock": false,
00:13:15.100    "num_base_bdevs": 2,
00:13:15.100    "num_base_bdevs_discovered": 2,
00:13:15.100    "num_base_bdevs_operational": 2,
00:13:15.100    "process": {
00:13:15.100      "type": "rebuild",
00:13:15.100      "target": "spare",
00:13:15.100      "progress": {
00:13:15.100        "blocks": 10240,
00:13:15.100        "percent": 15
00:13:15.100      }
00:13:15.100    },
00:13:15.100    "base_bdevs_list": [
00:13:15.100      {
00:13:15.100        "name": "spare",
00:13:15.100        "uuid": "ada45bb1-4858-56ea-b925-b2c7b0b8ae5f",
00:13:15.100        "is_configured": true,
00:13:15.100        "data_offset": 0,
00:13:15.100        "data_size": 65536
00:13:15.100      },
00:13:15.100      {
00:13:15.100        "name": "BaseBdev2",
00:13:15.100        "uuid": "80fd2965-d092-5348-859d-3432c3bdd2e1",
00:13:15.100        "is_configured": true,
00:13:15.100        "data_offset": 0,
00:13:15.100        "data_size": 65536
00:13:15.100      }
00:13:15.100    ]
00:13:15.100  }'
00:13:15.100    11:34:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:15.100   11:34:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:15.100    11:34:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:15.359   11:34:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:15.359   11:34:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']'
00:13:15.359   11:34:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:13:15.359   11:34:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:13:15.359   11:34:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:13:15.359   11:34:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=334
00:13:15.359   11:34:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:13:15.359   11:34:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:15.359   11:34:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:15.359   11:34:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:15.359   11:34:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:15.359   11:34:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:15.359    11:34:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:15.359    11:34:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:15.359    11:34:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:15.359    11:34:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:15.359    11:34:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:15.359  [2024-12-16 11:34:41.207426] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:13:15.359  [2024-12-16 11:34:41.214064] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:13:15.359   11:34:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:15.359    "name": "raid_bdev1",
00:13:15.359    "uuid": "d9fe4b52-448c-47fa-80b2-cd3326585d3b",
00:13:15.359    "strip_size_kb": 0,
00:13:15.359    "state": "online",
00:13:15.359    "raid_level": "raid1",
00:13:15.359    "superblock": false,
00:13:15.359    "num_base_bdevs": 2,
00:13:15.359    "num_base_bdevs_discovered": 2,
00:13:15.359    "num_base_bdevs_operational": 2,
00:13:15.359    "process": {
00:13:15.359      "type": "rebuild",
00:13:15.359      "target": "spare",
00:13:15.359      "progress": {
00:13:15.359        "blocks": 12288,
00:13:15.359        "percent": 18
00:13:15.359      }
00:13:15.359    },
00:13:15.359    "base_bdevs_list": [
00:13:15.359      {
00:13:15.359        "name": "spare",
00:13:15.359        "uuid": "ada45bb1-4858-56ea-b925-b2c7b0b8ae5f",
00:13:15.359        "is_configured": true,
00:13:15.359        "data_offset": 0,
00:13:15.359        "data_size": 65536
00:13:15.359      },
00:13:15.359      {
00:13:15.359        "name": "BaseBdev2",
00:13:15.359        "uuid": "80fd2965-d092-5348-859d-3432c3bdd2e1",
00:13:15.359        "is_configured": true,
00:13:15.359        "data_offset": 0,
00:13:15.359        "data_size": 65536
00:13:15.359      }
00:13:15.359    ]
00:13:15.359  }'
00:13:15.359    11:34:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:15.359   11:34:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:15.359    11:34:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:15.359   11:34:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:15.359   11:34:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
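
From here the harness sits in a bounded polling loop: as long as bash's SECONDS stays under the 334-second budget it re-reads the raid bdev JSON, re-asserts that a rebuild targeting the spare is still reported, and sleeps for a second before the next pass (the progress.blocks/percent values are what move between iterations). A reduced sketch of that loop, keeping only the checks visible in the trace:

    # Reduced sketch of the poll driven by bdev_raid.sh@706-711 above; the real loop keeps
    # asserting rebuild progress via verify_raid_bdev_process, whereas this sketch simply
    # stops once a rebuild of the spare is no longer reported.
    timeout=334
    while (( SECONDS < timeout )); do
        info=$(rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        [[ $(jq -r '.process.type // "none"' <<< "$info") == "rebuild" ]] || break
        [[ $(jq -r '.process.target // "none"' <<< "$info") == "spare" ]] || break
        sleep 1
    done
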
00:13:15.619  [2024-12-16 11:34:41.436303] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:13:15.619  [2024-12-16 11:34:41.436706] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:13:15.879        138.25 IOPS,   414.75 MiB/s
[2024-12-16T11:34:41.946Z] [2024-12-16 11:34:41.825723] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:13:15.879  [2024-12-16 11:34:41.826303] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:13:16.139  [2024-12-16 11:34:41.961159] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:13:16.398  [2024-12-16 11:34:42.294736] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720
00:13:16.398  [2024-12-16 11:34:42.295410] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720
00:13:16.398   11:34:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:13:16.398   11:34:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:16.398   11:34:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:16.398   11:34:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:16.398   11:34:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:16.398   11:34:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:16.398    11:34:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:16.398    11:34:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:16.398    11:34:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:16.398    11:34:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:16.398    11:34:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:16.398   11:34:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:16.398    "name": "raid_bdev1",
00:13:16.398    "uuid": "d9fe4b52-448c-47fa-80b2-cd3326585d3b",
00:13:16.398    "strip_size_kb": 0,
00:13:16.398    "state": "online",
00:13:16.398    "raid_level": "raid1",
00:13:16.398    "superblock": false,
00:13:16.398    "num_base_bdevs": 2,
00:13:16.398    "num_base_bdevs_discovered": 2,
00:13:16.398    "num_base_bdevs_operational": 2,
00:13:16.398    "process": {
00:13:16.398      "type": "rebuild",
00:13:16.398      "target": "spare",
00:13:16.398      "progress": {
00:13:16.398        "blocks": 26624,
00:13:16.398        "percent": 40
00:13:16.398      }
00:13:16.398    },
00:13:16.398    "base_bdevs_list": [
00:13:16.398      {
00:13:16.398        "name": "spare",
00:13:16.398        "uuid": "ada45bb1-4858-56ea-b925-b2c7b0b8ae5f",
00:13:16.398        "is_configured": true,
00:13:16.398        "data_offset": 0,
00:13:16.398        "data_size": 65536
00:13:16.398      },
00:13:16.398      {
00:13:16.398        "name": "BaseBdev2",
00:13:16.398        "uuid": "80fd2965-d092-5348-859d-3432c3bdd2e1",
00:13:16.398        "is_configured": true,
00:13:16.398        "data_offset": 0,
00:13:16.398        "data_size": 65536
00:13:16.398      }
00:13:16.398    ]
00:13:16.398  }'
00:13:16.398    11:34:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:16.398   11:34:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:16.398    11:34:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:16.657   11:34:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:16.657   11:34:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:13:16.657        123.20 IOPS,   369.60 MiB/s
[2024-12-16T11:34:42.724Z] [2024-12-16 11:34:42.619014] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864
00:13:16.916  [2024-12-16 11:34:42.727479] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864
00:13:17.175  [2024-12-16 11:34:43.159789] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008
00:13:17.434   11:34:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:13:17.434   11:34:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:17.434   11:34:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:17.434   11:34:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:17.434   11:34:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:17.434   11:34:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:17.434    11:34:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:17.434    11:34:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:17.434    11:34:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:17.434    11:34:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:17.434    11:34:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:17.693   11:34:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:17.693    "name": "raid_bdev1",
00:13:17.693    "uuid": "d9fe4b52-448c-47fa-80b2-cd3326585d3b",
00:13:17.693    "strip_size_kb": 0,
00:13:17.693    "state": "online",
00:13:17.693    "raid_level": "raid1",
00:13:17.693    "superblock": false,
00:13:17.693    "num_base_bdevs": 2,
00:13:17.693    "num_base_bdevs_discovered": 2,
00:13:17.693    "num_base_bdevs_operational": 2,
00:13:17.693    "process": {
00:13:17.693      "type": "rebuild",
00:13:17.693      "target": "spare",
00:13:17.693      "progress": {
00:13:17.693        "blocks": 43008,
00:13:17.693        "percent": 65
00:13:17.693      }
00:13:17.693    },
00:13:17.693    "base_bdevs_list": [
00:13:17.693      {
00:13:17.693        "name": "spare",
00:13:17.693        "uuid": "ada45bb1-4858-56ea-b925-b2c7b0b8ae5f",
00:13:17.693        "is_configured": true,
00:13:17.693        "data_offset": 0,
00:13:17.693        "data_size": 65536
00:13:17.693      },
00:13:17.693      {
00:13:17.693        "name": "BaseBdev2",
00:13:17.693        "uuid": "80fd2965-d092-5348-859d-3432c3bdd2e1",
00:13:17.693        "is_configured": true,
00:13:17.693        "data_offset": 0,
00:13:17.693        "data_size": 65536
00:13:17.693      }
00:13:17.693    ]
00:13:17.693  }'
00:13:17.693    11:34:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:17.693   11:34:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:17.694    11:34:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:17.694        111.50 IOPS,   334.50 MiB/s
[2024-12-16T11:34:43.761Z] [2024-12-16 11:34:43.616433] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152
00:13:17.694   11:34:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:17.694   11:34:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:13:18.262  [2024-12-16 11:34:44.059078] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296
00:13:18.522  [2024-12-16 11:34:44.516600] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440
00:13:18.781        100.86 IOPS,   302.57 MiB/s
[2024-12-16T11:34:44.848Z]  11:34:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:13:18.781   11:34:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:18.781   11:34:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:18.781   11:34:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:18.781   11:34:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:18.781   11:34:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:18.781    11:34:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:18.781    11:34:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:18.781    11:34:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:18.781    11:34:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:18.781    11:34:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:18.781   11:34:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:18.781    "name": "raid_bdev1",
00:13:18.781    "uuid": "d9fe4b52-448c-47fa-80b2-cd3326585d3b",
00:13:18.781    "strip_size_kb": 0,
00:13:18.781    "state": "online",
00:13:18.781    "raid_level": "raid1",
00:13:18.781    "superblock": false,
00:13:18.781    "num_base_bdevs": 2,
00:13:18.781    "num_base_bdevs_discovered": 2,
00:13:18.781    "num_base_bdevs_operational": 2,
00:13:18.781    "process": {
00:13:18.781      "type": "rebuild",
00:13:18.781      "target": "spare",
00:13:18.781      "progress": {
00:13:18.781        "blocks": 61440,
00:13:18.781        "percent": 93
00:13:18.781      }
00:13:18.781    },
00:13:18.781    "base_bdevs_list": [
00:13:18.781      {
00:13:18.781        "name": "spare",
00:13:18.781        "uuid": "ada45bb1-4858-56ea-b925-b2c7b0b8ae5f",
00:13:18.781        "is_configured": true,
00:13:18.781        "data_offset": 0,
00:13:18.781        "data_size": 65536
00:13:18.781      },
00:13:18.781      {
00:13:18.781        "name": "BaseBdev2",
00:13:18.781        "uuid": "80fd2965-d092-5348-859d-3432c3bdd2e1",
00:13:18.781        "is_configured": true,
00:13:18.781        "data_offset": 0,
00:13:18.781        "data_size": 65536
00:13:18.781      }
00:13:18.781    ]
00:13:18.781  }'
00:13:18.781    11:34:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:18.781   11:34:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:18.781    11:34:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:18.781   11:34:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:18.781   11:34:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:13:18.781  [2024-12-16 11:34:44.846276] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:13:19.041  [2024-12-16 11:34:44.946121] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:13:19.041  [2024-12-16 11:34:44.954578] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:19.868         93.75 IOPS,   281.25 MiB/s
[2024-12-16T11:34:45.935Z]  11:34:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:13:19.868   11:34:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:19.868   11:34:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:19.868   11:34:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:19.868   11:34:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:19.868   11:34:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:19.868    11:34:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:19.868    11:34:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:19.868    11:34:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:19.868    11:34:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:19.868    11:34:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:19.868   11:34:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:19.868    "name": "raid_bdev1",
00:13:19.868    "uuid": "d9fe4b52-448c-47fa-80b2-cd3326585d3b",
00:13:19.868    "strip_size_kb": 0,
00:13:19.868    "state": "online",
00:13:19.868    "raid_level": "raid1",
00:13:19.868    "superblock": false,
00:13:19.868    "num_base_bdevs": 2,
00:13:19.868    "num_base_bdevs_discovered": 2,
00:13:19.868    "num_base_bdevs_operational": 2,
00:13:19.868    "base_bdevs_list": [
00:13:19.868      {
00:13:19.868        "name": "spare",
00:13:19.868        "uuid": "ada45bb1-4858-56ea-b925-b2c7b0b8ae5f",
00:13:19.868        "is_configured": true,
00:13:19.868        "data_offset": 0,
00:13:19.868        "data_size": 65536
00:13:19.868      },
00:13:19.868      {
00:13:19.868        "name": "BaseBdev2",
00:13:19.868        "uuid": "80fd2965-d092-5348-859d-3432c3bdd2e1",
00:13:19.868        "is_configured": true,
00:13:19.868        "data_offset": 0,
00:13:19.868        "data_size": 65536
00:13:19.868      }
00:13:19.868    ]
00:13:19.868  }'
00:13:19.868    11:34:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:19.868   11:34:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:13:19.868    11:34:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:20.128   11:34:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:13:20.128   11:34:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break
00:13:20.128   11:34:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:20.128   11:34:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:20.128   11:34:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:20.128   11:34:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:20.128   11:34:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:20.128    11:34:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:20.128    11:34:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:20.128    11:34:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:20.128    11:34:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:20.128    11:34:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:20.128   11:34:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:20.128    "name": "raid_bdev1",
00:13:20.128    "uuid": "d9fe4b52-448c-47fa-80b2-cd3326585d3b",
00:13:20.128    "strip_size_kb": 0,
00:13:20.128    "state": "online",
00:13:20.128    "raid_level": "raid1",
00:13:20.128    "superblock": false,
00:13:20.128    "num_base_bdevs": 2,
00:13:20.128    "num_base_bdevs_discovered": 2,
00:13:20.128    "num_base_bdevs_operational": 2,
00:13:20.128    "base_bdevs_list": [
00:13:20.128      {
00:13:20.128        "name": "spare",
00:13:20.128        "uuid": "ada45bb1-4858-56ea-b925-b2c7b0b8ae5f",
00:13:20.128        "is_configured": true,
00:13:20.128        "data_offset": 0,
00:13:20.128        "data_size": 65536
00:13:20.128      },
00:13:20.128      {
00:13:20.128        "name": "BaseBdev2",
00:13:20.128        "uuid": "80fd2965-d092-5348-859d-3432c3bdd2e1",
00:13:20.128        "is_configured": true,
00:13:20.128        "data_offset": 0,
00:13:20.128        "data_size": 65536
00:13:20.129      }
00:13:20.129    ]
00:13:20.129  }'
00:13:20.129    11:34:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:20.129   11:34:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:20.129    11:34:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:20.129   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:20.129   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:13:20.129   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:20.129   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:20.129   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:20.129   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:20.129   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:20.129   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:20.129   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:20.129   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:20.129   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:20.129    11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:20.129    11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:20.129    11:34:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:20.129    11:34:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:20.129    11:34:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:20.129   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:20.129    "name": "raid_bdev1",
00:13:20.129    "uuid": "d9fe4b52-448c-47fa-80b2-cd3326585d3b",
00:13:20.129    "strip_size_kb": 0,
00:13:20.129    "state": "online",
00:13:20.129    "raid_level": "raid1",
00:13:20.129    "superblock": false,
00:13:20.129    "num_base_bdevs": 2,
00:13:20.129    "num_base_bdevs_discovered": 2,
00:13:20.129    "num_base_bdevs_operational": 2,
00:13:20.129    "base_bdevs_list": [
00:13:20.129      {
00:13:20.129        "name": "spare",
00:13:20.129        "uuid": "ada45bb1-4858-56ea-b925-b2c7b0b8ae5f",
00:13:20.129        "is_configured": true,
00:13:20.129        "data_offset": 0,
00:13:20.129        "data_size": 65536
00:13:20.129      },
00:13:20.129      {
00:13:20.129        "name": "BaseBdev2",
00:13:20.129        "uuid": "80fd2965-d092-5348-859d-3432c3bdd2e1",
00:13:20.129        "is_configured": true,
00:13:20.129        "data_offset": 0,
00:13:20.129        "data_size": 65536
00:13:20.129      }
00:13:20.129    ]
00:13:20.129  }'
00:13:20.129   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:20.129   11:34:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:20.742   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:13:20.742   11:34:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:20.742   11:34:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:20.742  [2024-12-16 11:34:46.485237] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:20.742  [2024-12-16 11:34:46.485273] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:20.742  
00:13:20.742                                                                                                  Latency(us)
00:13:20.742  
[2024-12-16T11:34:46.809Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:20.742  Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:13:20.742  	 raid_bdev1          :       8.98      86.79     260.38       0.00     0.00   15784.31     289.76  114931.26
00:13:20.742  
[2024-12-16T11:34:46.809Z]  ===================================================================================================================
00:13:20.742  
[2024-12-16T11:34:46.809Z]  Total                       :                 86.79     260.38       0.00     0.00   15784.31     289.76  114931.26
00:13:20.742  [2024-12-16 11:34:46.572803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:20.742  [2024-12-16 11:34:46.572849] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:20.742  [2024-12-16 11:34:46.572926] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:20.742  [2024-12-16 11:34:46.572936] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:13:20.742  {
00:13:20.742    "results": [
00:13:20.742      {
00:13:20.743        "job": "raid_bdev1",
00:13:20.743        "core_mask": "0x1",
00:13:20.743        "workload": "randrw",
00:13:20.743        "percentage": 50,
00:13:20.743        "status": "finished",
00:13:20.743        "queue_depth": 2,
00:13:20.743        "io_size": 3145728,
00:13:20.743        "runtime": 8.975424,
00:13:20.743        "iops": 86.7925570981382,
00:13:20.743        "mibps": 260.3776712944146,
00:13:20.743        "io_failed": 0,
00:13:20.743        "io_timeout": 0,
00:13:20.743        "avg_latency_us": 15784.307540178595,
00:13:20.743        "min_latency_us": 289.7606986899563,
00:13:20.743        "max_latency_us": 114931.2558951965
00:13:20.743      }
00:13:20.743    ],
00:13:20.743    "core_count": 1
00:13:20.743  }
00:13:20.743   11:34:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:20.743    11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:20.743    11:34:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:20.743    11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length
00:13:20.743    11:34:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:20.743    11:34:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:20.743   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:13:20.743   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:13:20.743   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']'
00:13:20.743   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0
00:13:20.743   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:13:20.743   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare')
00:13:20.743   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:13:20.743   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:13:20.743   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:13:20.743   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i
00:13:20.743   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:13:20.743   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:20.743   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0
00:13:21.002  /dev/nbd0
00:13:21.002    11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:13:21.002   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:13:21.002   11:34:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:13:21.002   11:34:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i
00:13:21.002   11:34:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:13:21.002   11:34:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:13:21.002   11:34:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:13:21.002   11:34:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break
00:13:21.002   11:34:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:13:21.002   11:34:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:13:21.002   11:34:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:21.002  1+0 records in
00:13:21.002  1+0 records out
00:13:21.002  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397572 s, 10.3 MB/s
00:13:21.002    11:34:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:21.002   11:34:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096
00:13:21.002   11:34:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:21.002   11:34:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:13:21.002   11:34:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0
00:13:21.002   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:21.002   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:21.002   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}"
00:13:21.002   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']'
00:13:21.002   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1
00:13:21.002   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:13:21.002   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2')
00:13:21.003   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:13:21.003   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
00:13:21.003   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:13:21.003   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i
00:13:21.003   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:13:21.003   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:21.003   11:34:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1
00:13:21.262  /dev/nbd1
00:13:21.262    11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:13:21.262   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:13:21.262   11:34:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:13:21.262   11:34:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i
00:13:21.262   11:34:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:13:21.262   11:34:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:13:21.262   11:34:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:13:21.262   11:34:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break
00:13:21.262   11:34:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:13:21.262   11:34:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:13:21.262   11:34:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:21.262  1+0 records in
00:13:21.262  1+0 records out
00:13:21.262  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353575 s, 11.6 MB/s
00:13:21.262    11:34:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:21.262   11:34:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096
00:13:21.262   11:34:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:21.262   11:34:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:13:21.262   11:34:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0
00:13:21.262   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:21.262   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:21.262   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
00:13:21.262   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1
00:13:21.262   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:13:21.262   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:13:21.262   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:21.262   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i
00:13:21.262   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:21.262   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:13:21.523    11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:13:21.523   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:13:21.523   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:13:21.523   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:21.523   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:21.523   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:13:21.523   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break
00:13:21.523   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0
00:13:21.523   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:13:21.523   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:13:21.523   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:13:21.523   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:21.523   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i
00:13:21.523   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:21.523   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:13:21.782    11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:21.782   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:21.782   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:21.782   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:21.782   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:21.782   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:21.782   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break
00:13:21.782   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0
00:13:21.782   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']'
00:13:21.782   11:34:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 87483
00:13:21.782   11:34:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 87483 ']'
00:13:21.782   11:34:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 87483
00:13:21.782    11:34:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname
00:13:21.782   11:34:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:13:21.782    11:34:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87483
00:13:21.782   11:34:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:13:21.782   11:34:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:13:21.783   11:34:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87483'
00:13:21.783  killing process with pid 87483
00:13:21.783  Received shutdown signal, test time was about 10.149393 seconds
00:13:21.783  
00:13:21.783                                                                                                  Latency(us)
00:13:21.783  
[2024-12-16T11:34:47.850Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:21.783  
[2024-12-16T11:34:47.850Z]  ===================================================================================================================
00:13:21.783  
[2024-12-16T11:34:47.850Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:13:21.783   11:34:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 87483
00:13:21.783  [2024-12-16 11:34:47.740707] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:21.783   11:34:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 87483
00:13:21.783  [2024-12-16 11:34:47.767070] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:22.042   11:34:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0
00:13:22.042  
00:13:22.042  real	0m11.634s
00:13:22.042  user	0m14.961s
00:13:22.042  sys	0m1.462s
00:13:22.042   11:34:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:22.042   11:34:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:22.042  ************************************
00:13:22.042  END TEST raid_rebuild_test_io
00:13:22.042  ************************************
00:13:22.042   11:34:48 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true
00:13:22.042   11:34:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:13:22.042   11:34:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:22.042   11:34:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:13:22.042  ************************************
00:13:22.042  START TEST raid_rebuild_test_sb_io
00:13:22.042  ************************************
00:13:22.042   11:34:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true
00:13:22.042   11:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1
00:13:22.042   11:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2
00:13:22.042   11:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true
00:13:22.042   11:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true
00:13:22.042   11:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true
00:13:22.042    11:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:13:22.042    11:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:13:22.042    11:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:13:22.042    11:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:13:22.042    11:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:13:22.042    11:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:13:22.042    11:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:13:22.042    11:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:13:22.042   11:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:13:22.042   11:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:13:22.042   11:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:13:22.042   11:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size
00:13:22.042   11:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg
00:13:22.042   11:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:13:22.042   11:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset
00:13:22.042   11:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']'
00:13:22.042   11:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0
00:13:22.042   11:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
00:13:22.042   11:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
00:13:22.042   11:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87861
00:13:22.042   11:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:13:22.042   11:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87861
00:13:22.042   11:34:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 87861 ']'
00:13:22.042   11:34:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:22.042   11:34:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:22.042  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:22.042   11:34:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:22.042   11:34:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:22.042   11:34:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:22.302  [2024-12-16 11:34:48.175497] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:13:22.302  I/O size of 3145728 is greater than zero copy threshold (65536).
00:13:22.302  Zero copy mechanism will not be used.
00:13:22.302  [2024-12-16 11:34:48.176063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87861 ]
00:13:22.302  [2024-12-16 11:34:48.337220] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:22.561  [2024-12-16 11:34:48.384906] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:13:22.561  [2024-12-16 11:34:48.427808] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:22.561  [2024-12-16 11:34:48.427850] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:23.131  BaseBdev1_malloc
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:23.131  [2024-12-16 11:34:49.046232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:13:23.131  [2024-12-16 11:34:49.046298] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:23.131  [2024-12-16 11:34:49.046335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:13:23.131  [2024-12-16 11:34:49.046351] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:23.131  [2024-12-16 11:34:49.048665] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:23.131  [2024-12-16 11:34:49.048704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:13:23.131  BaseBdev1
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:23.131  BaseBdev2_malloc
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:23.131  [2024-12-16 11:34:49.085650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:13:23.131  [2024-12-16 11:34:49.085706] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:23.131  [2024-12-16 11:34:49.085728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:13:23.131  [2024-12-16 11:34:49.085737] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:23.131  [2024-12-16 11:34:49.087980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:23.131  [2024-12-16 11:34:49.088020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:13:23.131  BaseBdev2
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:23.131  spare_malloc
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:23.131  spare_delay
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:23.131  [2024-12-16 11:34:49.126232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:13:23.131  [2024-12-16 11:34:49.126287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:23.131  [2024-12-16 11:34:49.126310] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:13:23.131  [2024-12-16 11:34:49.126318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:23.131  [2024-12-16 11:34:49.128435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:23.131  [2024-12-16 11:34:49.128473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:13:23.131  spare
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:23.131  [2024-12-16 11:34:49.138250] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:23.131  [2024-12-16 11:34:49.140122] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:23.131  [2024-12-16 11:34:49.140274] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:13:23.131  [2024-12-16 11:34:49.140287] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:13:23.131  [2024-12-16 11:34:49.140522] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:13:23.131  [2024-12-16 11:34:49.140665] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:13:23.131  [2024-12-16 11:34:49.140677] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:13:23.131  [2024-12-16 11:34:49.140810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:23.131   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:23.132   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:23.132   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:23.132   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:23.132    11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:23.132    11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:23.132    11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:23.132    11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:23.132    11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:23.132   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:23.132    "name": "raid_bdev1",
00:13:23.132    "uuid": "1269a009-c5fb-4d5b-bb0f-74f8d8270f7e",
00:13:23.132    "strip_size_kb": 0,
00:13:23.132    "state": "online",
00:13:23.132    "raid_level": "raid1",
00:13:23.132    "superblock": true,
00:13:23.132    "num_base_bdevs": 2,
00:13:23.132    "num_base_bdevs_discovered": 2,
00:13:23.132    "num_base_bdevs_operational": 2,
00:13:23.132    "base_bdevs_list": [
00:13:23.132      {
00:13:23.132        "name": "BaseBdev1",
00:13:23.132        "uuid": "eb2183a6-1db2-5aef-8133-b9ee8fe20d4e",
00:13:23.132        "is_configured": true,
00:13:23.132        "data_offset": 2048,
00:13:23.132        "data_size": 63488
00:13:23.132      },
00:13:23.132      {
00:13:23.132        "name": "BaseBdev2",
00:13:23.132        "uuid": "ddb5f269-78ac-5ca6-b421-034a882f7659",
00:13:23.132        "is_configured": true,
00:13:23.132        "data_offset": 2048,
00:13:23.132        "data_size": 63488
00:13:23.132      }
00:13:23.132    ]
00:13:23.132  }'
00:13:23.132   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:23.391   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:23.650    11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:23.650    11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:13:23.650    11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:23.650    11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:23.650  [2024-12-16 11:34:49.585918] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:23.650    11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:23.650   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488
00:13:23.650    11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:23.650    11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:23.650    11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:23.650    11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:13:23.650    11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:23.650   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048
00:13:23.650   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']'
00:13:23.650   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:13:23.650   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:23.650   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:23.650   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:13:23.650  [2024-12-16 11:34:49.685326] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:13:23.650   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:23.650   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:23.650   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:23.650   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:23.650   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:23.650   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:23.650   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:13:23.650   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:23.650   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:23.650   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:23.650   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:23.650    11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:23.650    11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:23.650    11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:23.650    11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:23.650    11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:23.910   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:23.910    "name": "raid_bdev1",
00:13:23.910    "uuid": "1269a009-c5fb-4d5b-bb0f-74f8d8270f7e",
00:13:23.910    "strip_size_kb": 0,
00:13:23.910    "state": "online",
00:13:23.910    "raid_level": "raid1",
00:13:23.910    "superblock": true,
00:13:23.910    "num_base_bdevs": 2,
00:13:23.910    "num_base_bdevs_discovered": 1,
00:13:23.910    "num_base_bdevs_operational": 1,
00:13:23.910    "base_bdevs_list": [
00:13:23.910      {
00:13:23.910        "name": null,
00:13:23.910        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:23.910        "is_configured": false,
00:13:23.910        "data_offset": 0,
00:13:23.910        "data_size": 63488
00:13:23.910      },
00:13:23.910      {
00:13:23.910        "name": "BaseBdev2",
00:13:23.910        "uuid": "ddb5f269-78ac-5ca6-b421-034a882f7659",
00:13:23.910        "is_configured": true,
00:13:23.910        "data_offset": 2048,
00:13:23.910        "data_size": 63488
00:13:23.910      }
00:13:23.910    ]
00:13:23.910  }'
00:13:23.910   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:23.910   11:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:23.910  [2024-12-16 11:34:49.791160] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:13:23.910  I/O size of 3145728 is greater than zero copy threshold (65536).
00:13:23.910  Zero copy mechanism will not be used.
00:13:23.910  Running I/O for 60 seconds...
00:13:24.169   11:34:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:13:24.169   11:34:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:24.169   11:34:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:24.169  [2024-12-16 11:34:50.161897] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:24.169   11:34:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:24.169   11:34:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1
00:13:24.169  [2024-12-16 11:34:50.199407] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:13:24.169  [2024-12-16 11:34:50.201569] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:24.427  [2024-12-16 11:34:50.302130] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:13:24.427  [2024-12-16 11:34:50.302502] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:13:24.427  [2024-12-16 11:34:50.458019] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:13:24.427  [2024-12-16 11:34:50.458389] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:13:24.994  [2024-12-16 11:34:50.779285] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:13:24.994  [2024-12-16 11:34:50.779763] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:13:24.994        184.00 IOPS,   552.00 MiB/s
[2024-12-16T11:34:51.061Z] [2024-12-16 11:34:50.981364] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:13:24.994  [2024-12-16 11:34:50.981632] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:13:25.252   11:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:25.252   11:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:25.252   11:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:25.252   11:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:25.252   11:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:25.252    11:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:25.252    11:34:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:25.252    11:34:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:25.252    11:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:25.252    11:34:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:25.252  [2024-12-16 11:34:51.214800] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:13:25.252  [2024-12-16 11:34:51.215333] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:13:25.252   11:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:25.252    "name": "raid_bdev1",
00:13:25.252    "uuid": "1269a009-c5fb-4d5b-bb0f-74f8d8270f7e",
00:13:25.252    "strip_size_kb": 0,
00:13:25.252    "state": "online",
00:13:25.252    "raid_level": "raid1",
00:13:25.252    "superblock": true,
00:13:25.252    "num_base_bdevs": 2,
00:13:25.252    "num_base_bdevs_discovered": 2,
00:13:25.252    "num_base_bdevs_operational": 2,
00:13:25.252    "process": {
00:13:25.252      "type": "rebuild",
00:13:25.252      "target": "spare",
00:13:25.252      "progress": {
00:13:25.252        "blocks": 12288,
00:13:25.252        "percent": 19
00:13:25.252      }
00:13:25.252    },
00:13:25.252    "base_bdevs_list": [
00:13:25.252      {
00:13:25.252        "name": "spare",
00:13:25.252        "uuid": "ecc509c3-9524-5de3-9790-33a7ba24f147",
00:13:25.252        "is_configured": true,
00:13:25.252        "data_offset": 2048,
00:13:25.252        "data_size": 63488
00:13:25.252      },
00:13:25.252      {
00:13:25.252        "name": "BaseBdev2",
00:13:25.252        "uuid": "ddb5f269-78ac-5ca6-b421-034a882f7659",
00:13:25.252        "is_configured": true,
00:13:25.252        "data_offset": 2048,
00:13:25.252        "data_size": 63488
00:13:25.252      }
00:13:25.252    ]
00:13:25.252  }'
00:13:25.252    11:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:25.252   11:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:25.252    11:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:25.510   11:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
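The check that just passed is the core of verify_raid_bdev_process: dump every RAID bdev over RPC, isolate raid_bdev1 with jq, then assert that .process.type is "rebuild" and .process.target is "spare". The progress figures in the JSON are internally consistent as well: 12288 blocks out of the 63488-block data region is roughly 19 percent, matching the reported "percent" field. A standalone sketch of the same check against an already-running target (assumes the default /var/tmp/spdk.sock socket and the rpc.py path used elsewhere in this log):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    info=$($RPC -s /var/tmp/spdk.sock bdev_raid_get_bdevs all \
             | jq -r '.[] | select(.name == "raid_bdev1")')
    # both fields fall back to "none" when no background process is running
    [[ $(jq -r '.process.type   // "none"' <<<"$info") == rebuild ]]
    [[ $(jq -r '.process.target // "none"' <<<"$info") == spare ]]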
00:13:25.510   11:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:13:25.510   11:34:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:25.510   11:34:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:25.510  [2024-12-16 11:34:51.333100] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:25.510  [2024-12-16 11:34:51.418675] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:13:25.510  [2024-12-16 11:34:51.525917] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:13:25.510  [2024-12-16 11:34:51.536043] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:25.510  [2024-12-16 11:34:51.536126] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:25.510  [2024-12-16 11:34:51.536158] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:13:25.510  [2024-12-16 11:34:51.557827] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0
00:13:25.510   11:34:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:25.510   11:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:25.510   11:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:25.510   11:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:25.510   11:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:25.769   11:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:25.769   11:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:13:25.769   11:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:25.769   11:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:25.769   11:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:25.769   11:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:25.769    11:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:25.769    11:34:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:25.769    11:34:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:25.769    11:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:25.769    11:34:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:25.769   11:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:25.769    "name": "raid_bdev1",
00:13:25.769    "uuid": "1269a009-c5fb-4d5b-bb0f-74f8d8270f7e",
00:13:25.769    "strip_size_kb": 0,
00:13:25.769    "state": "online",
00:13:25.769    "raid_level": "raid1",
00:13:25.769    "superblock": true,
00:13:25.769    "num_base_bdevs": 2,
00:13:25.769    "num_base_bdevs_discovered": 1,
00:13:25.769    "num_base_bdevs_operational": 1,
00:13:25.769    "base_bdevs_list": [
00:13:25.769      {
00:13:25.769        "name": null,
00:13:25.769        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:25.769        "is_configured": false,
00:13:25.769        "data_offset": 0,
00:13:25.769        "data_size": 63488
00:13:25.769      },
00:13:25.769      {
00:13:25.769        "name": "BaseBdev2",
00:13:25.769        "uuid": "ddb5f269-78ac-5ca6-b421-034a882f7659",
00:13:25.769        "is_configured": true,
00:13:25.769        "data_offset": 2048,
00:13:25.769        "data_size": 63488
00:13:25.769      }
00:13:25.769    ]
00:13:25.769  }'
00:13:25.769   11:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:25.769   11:34:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:26.027        146.50 IOPS,   439.50 MiB/s
[2024-12-16T11:34:52.094Z]  11:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:26.027   11:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:26.027   11:34:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:26.027   11:34:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:26.027   11:34:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:26.027    11:34:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:26.027    11:34:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:26.027    11:34:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:26.027    11:34:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:26.027    11:34:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:26.027   11:34:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:26.027    "name": "raid_bdev1",
00:13:26.027    "uuid": "1269a009-c5fb-4d5b-bb0f-74f8d8270f7e",
00:13:26.027    "strip_size_kb": 0,
00:13:26.027    "state": "online",
00:13:26.027    "raid_level": "raid1",
00:13:26.027    "superblock": true,
00:13:26.027    "num_base_bdevs": 2,
00:13:26.027    "num_base_bdevs_discovered": 1,
00:13:26.027    "num_base_bdevs_operational": 1,
00:13:26.027    "base_bdevs_list": [
00:13:26.027      {
00:13:26.027        "name": null,
00:13:26.027        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:26.027        "is_configured": false,
00:13:26.027        "data_offset": 0,
00:13:26.027        "data_size": 63488
00:13:26.027      },
00:13:26.027      {
00:13:26.027        "name": "BaseBdev2",
00:13:26.027        "uuid": "ddb5f269-78ac-5ca6-b421-034a882f7659",
00:13:26.027        "is_configured": true,
00:13:26.027        "data_offset": 2048,
00:13:26.027        "data_size": 63488
00:13:26.027      }
00:13:26.027    ]
00:13:26.027  }'
00:13:26.027    11:34:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:26.286   11:34:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:26.286    11:34:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:26.286   11:34:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:26.286   11:34:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:13:26.286   11:34:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:26.286   11:34:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:26.286  [2024-12-16 11:34:52.138588] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:26.286   11:34:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:26.286   11:34:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1
00:13:26.286  [2024-12-16 11:34:52.189018] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:13:26.286  [2024-12-16 11:34:52.190985] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:26.286  [2024-12-16 11:34:52.315367] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:13:26.544  [2024-12-16 11:34:52.444651] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:13:26.544  [2024-12-16 11:34:52.445008] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:13:26.811  [2024-12-16 11:34:52.764163] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:13:27.082        156.33 IOPS,   469.00 MiB/s
[2024-12-16T11:34:53.149Z] [2024-12-16 11:34:52.883808] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:13:27.340   11:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:27.340   11:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:27.340   11:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:27.340   11:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:27.340   11:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:27.340    11:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:27.340    11:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:27.340    11:34:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:27.340    11:34:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:27.340    11:34:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:27.340  [2024-12-16 11:34:53.213884] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:13:27.340   11:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:27.340    "name": "raid_bdev1",
00:13:27.340    "uuid": "1269a009-c5fb-4d5b-bb0f-74f8d8270f7e",
00:13:27.340    "strip_size_kb": 0,
00:13:27.340    "state": "online",
00:13:27.340    "raid_level": "raid1",
00:13:27.340    "superblock": true,
00:13:27.340    "num_base_bdevs": 2,
00:13:27.340    "num_base_bdevs_discovered": 2,
00:13:27.340    "num_base_bdevs_operational": 2,
00:13:27.340    "process": {
00:13:27.340      "type": "rebuild",
00:13:27.340      "target": "spare",
00:13:27.340      "progress": {
00:13:27.340        "blocks": 12288,
00:13:27.340        "percent": 19
00:13:27.340      }
00:13:27.340    },
00:13:27.340    "base_bdevs_list": [
00:13:27.340      {
00:13:27.340        "name": "spare",
00:13:27.340        "uuid": "ecc509c3-9524-5de3-9790-33a7ba24f147",
00:13:27.340        "is_configured": true,
00:13:27.340        "data_offset": 2048,
00:13:27.340        "data_size": 63488
00:13:27.340      },
00:13:27.340      {
00:13:27.340        "name": "BaseBdev2",
00:13:27.340        "uuid": "ddb5f269-78ac-5ca6-b421-034a882f7659",
00:13:27.340        "is_configured": true,
00:13:27.340        "data_offset": 2048,
00:13:27.340        "data_size": 63488
00:13:27.340      }
00:13:27.340    ]
00:13:27.340  }'
00:13:27.340    11:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:27.340   11:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:27.340    11:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:27.340   11:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:27.340   11:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:13:27.340   11:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:13:27.340  /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
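The "unary operator expected" message above is a genuine shell error in the test script rather than a test failure: at bdev_raid.sh line 666 an unquoted variable expanded to the empty string, so test was invoked as '[' = false ']' and saw '=' where it expected an operand. The failing '[' appears to be part of a compound conditional (the preceding '[' true = true ']' also comes from line 666), so the non-zero status only makes the condition false and execution falls through to line 691. A defensive form quotes the expansion or uses [[ ]]; a sketch with a hypothetical variable standing in for the unquoted one:

    fast_copy=""                                       # hypothetical; stands in for the empty variable at line 666
    [ "$fast_copy" = false ] && echo "slow path"       # quoting keeps the empty string as a real operand
    [[ $fast_copy == false ]] && echo "slow path"      # [[ ]] does not word-split, so quoting is optional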
00:13:27.340   11:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:13:27.340   11:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:13:27.340   11:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:13:27.341   11:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=346
00:13:27.341   11:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:13:27.341   11:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:27.341   11:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:27.341   11:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:27.341   11:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:27.341   11:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:27.341    11:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:27.341    11:34:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:27.341    11:34:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:27.341    11:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:27.341    11:34:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:27.341   11:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:27.341    "name": "raid_bdev1",
00:13:27.341    "uuid": "1269a009-c5fb-4d5b-bb0f-74f8d8270f7e",
00:13:27.341    "strip_size_kb": 0,
00:13:27.341    "state": "online",
00:13:27.341    "raid_level": "raid1",
00:13:27.341    "superblock": true,
00:13:27.341    "num_base_bdevs": 2,
00:13:27.341    "num_base_bdevs_discovered": 2,
00:13:27.341    "num_base_bdevs_operational": 2,
00:13:27.341    "process": {
00:13:27.341      "type": "rebuild",
00:13:27.341      "target": "spare",
00:13:27.341      "progress": {
00:13:27.341        "blocks": 14336,
00:13:27.341        "percent": 22
00:13:27.341      }
00:13:27.341    },
00:13:27.341    "base_bdevs_list": [
00:13:27.341      {
00:13:27.341        "name": "spare",
00:13:27.341        "uuid": "ecc509c3-9524-5de3-9790-33a7ba24f147",
00:13:27.341        "is_configured": true,
00:13:27.341        "data_offset": 2048,
00:13:27.341        "data_size": 63488
00:13:27.341      },
00:13:27.341      {
00:13:27.341        "name": "BaseBdev2",
00:13:27.341        "uuid": "ddb5f269-78ac-5ca6-b421-034a882f7659",
00:13:27.341        "is_configured": true,
00:13:27.341        "data_offset": 2048,
00:13:27.341        "data_size": 63488
00:13:27.341      }
00:13:27.341    ]
00:13:27.341  }'
00:13:27.341    11:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:27.599   11:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:27.599    11:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:27.599  [2024-12-16 11:34:53.446747] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:13:27.599   11:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:27.599   11:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:13:27.857        134.50 IOPS,   403.50 MiB/s
[2024-12-16T11:34:53.924Z] [2024-12-16 11:34:53.920156] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:13:28.422  [2024-12-16 11:34:54.260983] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720
00:13:28.422  [2024-12-16 11:34:54.474974] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720
00:13:28.422   11:34:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:13:28.422   11:34:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:28.422   11:34:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:28.422   11:34:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:28.422   11:34:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:28.422   11:34:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:28.422    11:34:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:28.422    11:34:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:28.422    11:34:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:28.422    11:34:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:28.682    11:34:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:28.682   11:34:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:28.682    "name": "raid_bdev1",
00:13:28.682    "uuid": "1269a009-c5fb-4d5b-bb0f-74f8d8270f7e",
00:13:28.682    "strip_size_kb": 0,
00:13:28.682    "state": "online",
00:13:28.682    "raid_level": "raid1",
00:13:28.682    "superblock": true,
00:13:28.682    "num_base_bdevs": 2,
00:13:28.682    "num_base_bdevs_discovered": 2,
00:13:28.682    "num_base_bdevs_operational": 2,
00:13:28.682    "process": {
00:13:28.682      "type": "rebuild",
00:13:28.682      "target": "spare",
00:13:28.682      "progress": {
00:13:28.682        "blocks": 28672,
00:13:28.682        "percent": 45
00:13:28.682      }
00:13:28.682    },
00:13:28.682    "base_bdevs_list": [
00:13:28.682      {
00:13:28.682        "name": "spare",
00:13:28.682        "uuid": "ecc509c3-9524-5de3-9790-33a7ba24f147",
00:13:28.682        "is_configured": true,
00:13:28.682        "data_offset": 2048,
00:13:28.682        "data_size": 63488
00:13:28.682      },
00:13:28.682      {
00:13:28.682        "name": "BaseBdev2",
00:13:28.682        "uuid": "ddb5f269-78ac-5ca6-b421-034a882f7659",
00:13:28.682        "is_configured": true,
00:13:28.682        "data_offset": 2048,
00:13:28.682        "data_size": 63488
00:13:28.682      }
00:13:28.682    ]
00:13:28.682  }'
00:13:28.682    11:34:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:28.682   11:34:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:28.682    11:34:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:28.682   11:34:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:28.682   11:34:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:13:28.940        118.00 IOPS,   354.00 MiB/s
[2024-12-16T11:34:55.007Z] [2024-12-16 11:34:54.947898] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864
00:13:29.199  [2024-12-16 11:34:55.175920] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008
00:13:29.199  [2024-12-16 11:34:55.176347] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008
00:13:29.458  [2024-12-16 11:34:55.304757] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008
00:13:29.458  [2024-12-16 11:34:55.304952] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008
00:13:29.716   11:34:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:13:29.716   11:34:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:29.716   11:34:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:29.716   11:34:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:29.716   11:34:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:29.716   11:34:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:29.716    11:34:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:29.716    11:34:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:29.716    11:34:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:29.716    11:34:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:29.716    11:34:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:29.716   11:34:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:29.716    "name": "raid_bdev1",
00:13:29.716    "uuid": "1269a009-c5fb-4d5b-bb0f-74f8d8270f7e",
00:13:29.716    "strip_size_kb": 0,
00:13:29.716    "state": "online",
00:13:29.716    "raid_level": "raid1",
00:13:29.716    "superblock": true,
00:13:29.716    "num_base_bdevs": 2,
00:13:29.716    "num_base_bdevs_discovered": 2,
00:13:29.716    "num_base_bdevs_operational": 2,
00:13:29.716    "process": {
00:13:29.716      "type": "rebuild",
00:13:29.716      "target": "spare",
00:13:29.716      "progress": {
00:13:29.716        "blocks": 43008,
00:13:29.717        "percent": 67
00:13:29.717      }
00:13:29.717    },
00:13:29.717    "base_bdevs_list": [
00:13:29.717      {
00:13:29.717        "name": "spare",
00:13:29.717        "uuid": "ecc509c3-9524-5de3-9790-33a7ba24f147",
00:13:29.717        "is_configured": true,
00:13:29.717        "data_offset": 2048,
00:13:29.717        "data_size": 63488
00:13:29.717      },
00:13:29.717      {
00:13:29.717        "name": "BaseBdev2",
00:13:29.717        "uuid": "ddb5f269-78ac-5ca6-b421-034a882f7659",
00:13:29.717        "is_configured": true,
00:13:29.717        "data_offset": 2048,
00:13:29.717        "data_size": 63488
00:13:29.717      }
00:13:29.717    ]
00:13:29.717  }'
00:13:29.717    11:34:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:29.717   11:34:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:29.717    11:34:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:29.717  [2024-12-16 11:34:55.751490] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152
00:13:29.717   11:34:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:29.717   11:34:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:13:30.544        108.33 IOPS,   325.00 MiB/s
[2024-12-16T11:34:56.611Z] [2024-12-16 11:34:56.306096] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440
00:13:30.803  [2024-12-16 11:34:56.744530] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:13:30.803   11:34:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:13:30.803   11:34:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:30.803   11:34:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:30.803   11:34:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:30.803   11:34:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:30.803   11:34:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:30.803    11:34:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:30.803    11:34:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:30.803    11:34:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:30.803    11:34:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:30.803    11:34:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:30.803         97.71 IOPS,   293.14 MiB/s
[2024-12-16T11:34:56.870Z]  11:34:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:30.803    "name": "raid_bdev1",
00:13:30.803    "uuid": "1269a009-c5fb-4d5b-bb0f-74f8d8270f7e",
00:13:30.803    "strip_size_kb": 0,
00:13:30.803    "state": "online",
00:13:30.803    "raid_level": "raid1",
00:13:30.803    "superblock": true,
00:13:30.803    "num_base_bdevs": 2,
00:13:30.803    "num_base_bdevs_discovered": 2,
00:13:30.803    "num_base_bdevs_operational": 2,
00:13:30.803    "process": {
00:13:30.803      "type": "rebuild",
00:13:30.803      "target": "spare",
00:13:30.803      "progress": {
00:13:30.803        "blocks": 63488,
00:13:30.803        "percent": 100
00:13:30.803      }
00:13:30.803    },
00:13:30.803    "base_bdevs_list": [
00:13:30.803      {
00:13:30.803        "name": "spare",
00:13:30.803        "uuid": "ecc509c3-9524-5de3-9790-33a7ba24f147",
00:13:30.803        "is_configured": true,
00:13:30.803        "data_offset": 2048,
00:13:30.803        "data_size": 63488
00:13:30.803      },
00:13:30.803      {
00:13:30.803        "name": "BaseBdev2",
00:13:30.803        "uuid": "ddb5f269-78ac-5ca6-b421-034a882f7659",
00:13:30.803        "is_configured": true,
00:13:30.803        "data_offset": 2048,
00:13:30.803        "data_size": 63488
00:13:30.803      }
00:13:30.803    ]
00:13:30.803  }'
00:13:30.803    11:34:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:30.803  [2024-12-16 11:34:56.844362] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:13:30.803  [2024-12-16 11:34:56.855733] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:31.061   11:34:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:31.061    11:34:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:31.061   11:34:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:31.061   11:34:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:13:31.997         89.75 IOPS,   269.25 MiB/s
[2024-12-16T11:34:58.064Z]  11:34:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:13:31.997   11:34:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:31.997   11:34:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:31.997   11:34:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:31.997   11:34:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:31.997   11:34:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:31.997    11:34:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:31.997    11:34:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:31.997    11:34:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:31.997    11:34:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:31.997    11:34:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:31.997   11:34:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:31.997    "name": "raid_bdev1",
00:13:31.997    "uuid": "1269a009-c5fb-4d5b-bb0f-74f8d8270f7e",
00:13:31.997    "strip_size_kb": 0,
00:13:31.997    "state": "online",
00:13:31.997    "raid_level": "raid1",
00:13:31.997    "superblock": true,
00:13:31.997    "num_base_bdevs": 2,
00:13:31.997    "num_base_bdevs_discovered": 2,
00:13:31.997    "num_base_bdevs_operational": 2,
00:13:31.997    "base_bdevs_list": [
00:13:31.997      {
00:13:31.997        "name": "spare",
00:13:31.997        "uuid": "ecc509c3-9524-5de3-9790-33a7ba24f147",
00:13:31.997        "is_configured": true,
00:13:31.997        "data_offset": 2048,
00:13:31.997        "data_size": 63488
00:13:31.997      },
00:13:31.997      {
00:13:31.997        "name": "BaseBdev2",
00:13:31.997        "uuid": "ddb5f269-78ac-5ca6-b421-034a882f7659",
00:13:31.997        "is_configured": true,
00:13:31.997        "data_offset": 2048,
00:13:31.997        "data_size": 63488
00:13:31.997      }
00:13:31.997    ]
00:13:31.997  }'
00:13:31.997    11:34:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:31.997   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:13:31.997    11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:31.997   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:13:31.997   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break
00:13:31.997   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:31.997   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:31.997   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:31.998   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:31.998   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:31.998    11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:31.998    11:34:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:31.998    11:34:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:32.256    11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:32.256    11:34:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:32.256   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:32.256    "name": "raid_bdev1",
00:13:32.256    "uuid": "1269a009-c5fb-4d5b-bb0f-74f8d8270f7e",
00:13:32.256    "strip_size_kb": 0,
00:13:32.256    "state": "online",
00:13:32.256    "raid_level": "raid1",
00:13:32.256    "superblock": true,
00:13:32.256    "num_base_bdevs": 2,
00:13:32.256    "num_base_bdevs_discovered": 2,
00:13:32.256    "num_base_bdevs_operational": 2,
00:13:32.256    "base_bdevs_list": [
00:13:32.256      {
00:13:32.256        "name": "spare",
00:13:32.256        "uuid": "ecc509c3-9524-5de3-9790-33a7ba24f147",
00:13:32.256        "is_configured": true,
00:13:32.256        "data_offset": 2048,
00:13:32.256        "data_size": 63488
00:13:32.256      },
00:13:32.256      {
00:13:32.256        "name": "BaseBdev2",
00:13:32.256        "uuid": "ddb5f269-78ac-5ca6-b421-034a882f7659",
00:13:32.256        "is_configured": true,
00:13:32.256        "data_offset": 2048,
00:13:32.256        "data_size": 63488
00:13:32.256      }
00:13:32.256    ]
00:13:32.256  }'
00:13:32.256    11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:32.256   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:32.256    11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:32.256   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:32.256   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:13:32.256   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:32.256   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:32.256   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:32.256   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:32.256   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:32.256   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:32.256   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:32.256   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:32.256   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:32.256    11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:32.256    11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:32.256    11:34:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:32.256    11:34:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:32.256    11:34:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:32.256   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:32.256    "name": "raid_bdev1",
00:13:32.256    "uuid": "1269a009-c5fb-4d5b-bb0f-74f8d8270f7e",
00:13:32.256    "strip_size_kb": 0,
00:13:32.256    "state": "online",
00:13:32.256    "raid_level": "raid1",
00:13:32.256    "superblock": true,
00:13:32.256    "num_base_bdevs": 2,
00:13:32.256    "num_base_bdevs_discovered": 2,
00:13:32.256    "num_base_bdevs_operational": 2,
00:13:32.256    "base_bdevs_list": [
00:13:32.256      {
00:13:32.256        "name": "spare",
00:13:32.256        "uuid": "ecc509c3-9524-5de3-9790-33a7ba24f147",
00:13:32.256        "is_configured": true,
00:13:32.256        "data_offset": 2048,
00:13:32.256        "data_size": 63488
00:13:32.256      },
00:13:32.256      {
00:13:32.256        "name": "BaseBdev2",
00:13:32.256        "uuid": "ddb5f269-78ac-5ca6-b421-034a882f7659",
00:13:32.256        "is_configured": true,
00:13:32.256        "data_offset": 2048,
00:13:32.256        "data_size": 63488
00:13:32.256      }
00:13:32.256    ]
00:13:32.256  }'
00:13:32.256   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:32.256   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:32.825   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:13:32.825   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:32.825   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:32.825  [2024-12-16 11:34:58.666995] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:32.825  [2024-12-16 11:34:58.667119] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:32.825  
00:13:32.825                                                                                                  Latency(us)
00:13:32.825  
[2024-12-16T11:34:58.892Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:32.825  Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:13:32.825  	 raid_bdev1          :       8.92      83.76     251.27       0.00     0.00   16081.80     282.61  110810.21
00:13:32.825  
[2024-12-16T11:34:58.892Z]  ===================================================================================================================
00:13:32.825  
[2024-12-16T11:34:58.892Z]  Total                       :                 83.76     251.27       0.00     0.00   16081.80     282.61  110810.21
00:13:32.825  [2024-12-16 11:34:58.698798] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:32.825  [2024-12-16 11:34:58.698899] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:32.825  [2024-12-16 11:34:58.699020] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:32.825  [2024-12-16 11:34:58.699099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:13:32.825  {
00:13:32.825    "results": [
00:13:32.825      {
00:13:32.825        "job": "raid_bdev1",
00:13:32.825        "core_mask": "0x1",
00:13:32.825        "workload": "randrw",
00:13:32.825        "percentage": 50,
00:13:32.825        "status": "finished",
00:13:32.825        "queue_depth": 2,
00:13:32.825        "io_size": 3145728,
00:13:32.825        "runtime": 8.918632,
00:13:32.825        "iops": 83.75723989957204,
00:13:32.825        "mibps": 251.27171969871614,
00:13:32.825        "io_failed": 0,
00:13:32.825        "io_timeout": 0,
00:13:32.825        "avg_latency_us": 16081.803779893959,
00:13:32.825        "min_latency_us": 282.6061135371179,
00:13:32.825        "max_latency_us": 110810.21484716157
00:13:32.825      }
00:13:32.825    ],
00:13:32.825    "core_count": 1
00:13:32.825  }
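The JSON above is the machine-readable form of the summary table printed a few lines earlier, and the numbers are self-consistent: with an io_size of 3145728 bytes (3 MiB), throughput in MiB/s is simply IOPS multiplied by 3. A quick sanity check of that arithmetic (needs only a POSIX awk):

    awk 'BEGIN {
        iops  = 83.75723989957204;          # "iops" from the results above
        mibps = iops * 3145728 / 1048576;   # io_size in bytes -> MiB per I/O
        printf "%.2f MiB/s\n", mibps;       # prints 251.27, matching "mibps"
    }'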
00:13:32.825   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:32.825    11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:32.825    11:34:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:32.825    11:34:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:32.825    11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length
00:13:32.825    11:34:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:32.825   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:13:32.825   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:13:32.825   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']'
00:13:32.825   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0
00:13:32.825   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:13:32.825   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare')
00:13:32.825   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:13:32.825   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:13:32.825   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:13:32.825   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i
00:13:32.825   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:13:32.825   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:32.825   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0
00:13:33.094  /dev/nbd0
00:13:33.094    11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:13:33.094   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:13:33.094   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:13:33.094   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i
00:13:33.094   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:13:33.094   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:13:33.094   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:13:33.094   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break
00:13:33.094   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:13:33.094   11:34:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:13:33.094   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:33.094  1+0 records in
00:13:33.094  1+0 records out
00:13:33.094  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325205 s, 12.6 MB/s
00:13:33.094    11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:33.094   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096
00:13:33.094   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:33.094   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:13:33.094   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0
00:13:33.094   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:33.094   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:33.094   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}"
00:13:33.094   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']'
00:13:33.094   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1
00:13:33.094   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:13:33.094   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2')
00:13:33.094   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:13:33.094   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
00:13:33.094   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:13:33.094   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i
00:13:33.094   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:13:33.094   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:33.094   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1
00:13:33.354  /dev/nbd1
00:13:33.354    11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:13:33.354   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:13:33.354   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:13:33.354   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i
00:13:33.354   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:13:33.354   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:13:33.354   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:13:33.354   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break
00:13:33.354   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:13:33.354   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:13:33.354   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:33.354  1+0 records in
00:13:33.354  1+0 records out
00:13:33.354  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366798 s, 11.2 MB/s
00:13:33.354    11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:33.354   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096
00:13:33.354   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:33.354   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:13:33.354   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0
00:13:33.354   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:33.354   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:33.354   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
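The cmp above is the data-integrity step of the test: the rebuilt spare and BaseBdev2 are both exported as NBD block devices and compared byte for byte, skipping the first 1048576 bytes on each side. That offset matches the data_offset of 2048 blocks reported for every base bdev in this log; with the 512-byte blocklen that raid_bdev_configure_cont reports elsewhere in the log, 2048 x 512 = 1048576, i.e. the comparison starts right after the per-device superblock region. A minimal reproduction of the same check (a sketch; socket, bdev names and device paths are the ones used above):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC -s /var/tmp/spdk.sock nbd_start_disk spare     /dev/nbd0
    $RPC -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1
    cmp -i 1048576 /dev/nbd0 /dev/nbd1      # exits 0 only if the mirrored data regions are identical
    $RPC -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
    $RPC -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0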
00:13:33.354   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1
00:13:33.354   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:13:33.354   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:13:33.354   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:33.354   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i
00:13:33.354   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:33.354   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:13:33.614    11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:13:33.614   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:13:33.614   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:13:33.614   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:33.614   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:33.614   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:13:33.614   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break
00:13:33.614   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0
00:13:33.614   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:13:33.614   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:13:33.614   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:13:33.614   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:33.614   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i
00:13:33.614   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:33.614   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:13:33.873    11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:33.873   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:33.873   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:33.873   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:33.873   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:33.873   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:33.873   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break
00:13:33.873   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0
00:13:33.873   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:13:33.873   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:13:33.873   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:33.873   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:33.873   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:33.873   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:13:33.873   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:33.873   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:33.873  [2024-12-16 11:34:59.839564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:13:33.873  [2024-12-16 11:34:59.839625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:33.873  [2024-12-16 11:34:59.839648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580
00:13:33.873  [2024-12-16 11:34:59.839661] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:33.873  [2024-12-16 11:34:59.842085] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:33.873  [2024-12-16 11:34:59.842133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:13:33.873  [2024-12-16 11:34:59.842224] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:13:33.873  [2024-12-16 11:34:59.842274] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:33.873  [2024-12-16 11:34:59.842395] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:33.873  spare
00:13:33.873   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:33.874   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:13:33.874   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:33.874   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:34.133  [2024-12-16 11:34:59.942310] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600
00:13:34.133  [2024-12-16 11:34:59.942398] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:13:34.133  [2024-12-16 11:34:59.942727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002af30
00:13:34.133  [2024-12-16 11:34:59.942887] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600
00:13:34.133  [2024-12-16 11:34:59.942902] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600
00:13:34.133  [2024-12-16 11:34:59.943078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:34.133   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:34.133   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:13:34.133   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:34.133   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:34.133   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:34.133   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:34.133   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:34.133   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:34.133   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:34.133   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:34.133   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:34.133    11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:34.133    11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:34.133    11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:34.133    11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:34.133    11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:34.133   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:34.133    "name": "raid_bdev1",
00:13:34.133    "uuid": "1269a009-c5fb-4d5b-bb0f-74f8d8270f7e",
00:13:34.133    "strip_size_kb": 0,
00:13:34.133    "state": "online",
00:13:34.133    "raid_level": "raid1",
00:13:34.133    "superblock": true,
00:13:34.133    "num_base_bdevs": 2,
00:13:34.133    "num_base_bdevs_discovered": 2,
00:13:34.133    "num_base_bdevs_operational": 2,
00:13:34.133    "base_bdevs_list": [
00:13:34.133      {
00:13:34.133        "name": "spare",
00:13:34.133        "uuid": "ecc509c3-9524-5de3-9790-33a7ba24f147",
00:13:34.133        "is_configured": true,
00:13:34.133        "data_offset": 2048,
00:13:34.133        "data_size": 63488
00:13:34.133      },
00:13:34.133      {
00:13:34.133        "name": "BaseBdev2",
00:13:34.133        "uuid": "ddb5f269-78ac-5ca6-b421-034a882f7659",
00:13:34.133        "is_configured": true,
00:13:34.133        "data_offset": 2048,
00:13:34.133        "data_size": 63488
00:13:34.133      }
00:13:34.133    ]
00:13:34.133  }'
00:13:34.133   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:34.133   11:34:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
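verify_raid_bdev_state (bdev_raid.sh@103-115, expanded above) pulls the JSON dumped above out of bdev_raid_get_bdevs and asserts the expected state, RAID level, strip size and member counts. The exact assertions are not visible in this trace, so the jq expressions below are illustrative assumptions built from the field names in the dump:

    # Illustrative only -- how the dumped raid_bdev_info could be checked field by field.
    tmp=$(rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.state' <<< "$tmp") == online ]]
    [[ $(jq -r '.raid_level' <<< "$tmp") == raid1 ]]
    [[ $(jq -r '.strip_size_kb' <<< "$tmp") == 0 ]]
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$tmp") == 2 ]]
    [[ $(jq -r '.num_base_bdevs_operational' <<< "$tmp") == 2 ]]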
00:13:34.393   11:35:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:34.393   11:35:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:34.393   11:35:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:34.393   11:35:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:34.393   11:35:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:34.393    11:35:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:34.393    11:35:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:34.393    11:35:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:34.393    11:35:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:34.393    11:35:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:34.393   11:35:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:34.393    "name": "raid_bdev1",
00:13:34.393    "uuid": "1269a009-c5fb-4d5b-bb0f-74f8d8270f7e",
00:13:34.393    "strip_size_kb": 0,
00:13:34.393    "state": "online",
00:13:34.393    "raid_level": "raid1",
00:13:34.393    "superblock": true,
00:13:34.393    "num_base_bdevs": 2,
00:13:34.393    "num_base_bdevs_discovered": 2,
00:13:34.393    "num_base_bdevs_operational": 2,
00:13:34.393    "base_bdevs_list": [
00:13:34.393      {
00:13:34.393        "name": "spare",
00:13:34.393        "uuid": "ecc509c3-9524-5de3-9790-33a7ba24f147",
00:13:34.393        "is_configured": true,
00:13:34.393        "data_offset": 2048,
00:13:34.393        "data_size": 63488
00:13:34.393      },
00:13:34.393      {
00:13:34.393        "name": "BaseBdev2",
00:13:34.393        "uuid": "ddb5f269-78ac-5ca6-b421-034a882f7659",
00:13:34.393        "is_configured": true,
00:13:34.393        "data_offset": 2048,
00:13:34.393        "data_size": 63488
00:13:34.393      }
00:13:34.393    ]
00:13:34.393  }'
00:13:34.394    11:35:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:34.394   11:35:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:34.654    11:35:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:34.654   11:35:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
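The two jq filters just above are what let verify_raid_bdev_process handle an idle array: jq's // alternative operator substitutes "none" when no rebuild is running and .process is therefore absent. A quick illustration with hypothetical input (not taken from this run):

    echo '{"name": "raid_bdev1"}'           | jq -r '.process.type // "none"'   # prints: none
    echo '{"process": {"type": "rebuild"}}' | jq -r '.process.type // "none"'   # prints: rebuild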
00:13:34.654    11:35:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:34.654    11:35:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:34.654    11:35:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:34.654    11:35:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:13:34.654    11:35:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:34.654   11:35:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:13:34.654   11:35:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:13:34.654   11:35:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:34.654   11:35:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:34.654  [2024-12-16 11:35:00.558652] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:34.654   11:35:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:34.654   11:35:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:34.654   11:35:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:34.654   11:35:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:34.654   11:35:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:34.654   11:35:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:34.654   11:35:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:13:34.654   11:35:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:34.654   11:35:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:34.654   11:35:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:34.654   11:35:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:34.654    11:35:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:34.654    11:35:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:34.654    11:35:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:34.654    11:35:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:34.654    11:35:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:34.654   11:35:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:34.654    "name": "raid_bdev1",
00:13:34.654    "uuid": "1269a009-c5fb-4d5b-bb0f-74f8d8270f7e",
00:13:34.654    "strip_size_kb": 0,
00:13:34.654    "state": "online",
00:13:34.654    "raid_level": "raid1",
00:13:34.654    "superblock": true,
00:13:34.654    "num_base_bdevs": 2,
00:13:34.654    "num_base_bdevs_discovered": 1,
00:13:34.654    "num_base_bdevs_operational": 1,
00:13:34.654    "base_bdevs_list": [
00:13:34.654      {
00:13:34.654        "name": null,
00:13:34.654        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:34.654        "is_configured": false,
00:13:34.654        "data_offset": 0,
00:13:34.654        "data_size": 63488
00:13:34.654      },
00:13:34.654      {
00:13:34.654        "name": "BaseBdev2",
00:13:34.654        "uuid": "ddb5f269-78ac-5ca6-b421-034a882f7659",
00:13:34.654        "is_configured": true,
00:13:34.654        "data_offset": 2048,
00:13:34.654        "data_size": 63488
00:13:34.654      }
00:13:34.654    ]
00:13:34.654  }'
00:13:34.654   11:35:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:34.654   11:35:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
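Note how the removed member is represented in the dump above: the slot keeps its place in base_bdevs_list, but with a null name, an all-zero uuid and is_configured false, so num_base_bdevs stays at 2 while num_base_bdevs_discovered drops to 1. Counting configured members directly would be a one-liner such as the following (illustrative, not part of the harness):

    jq -r '[.base_bdevs_list[] | select(.is_configured)] | length' <<< "$raid_bdev_info"   # 1 here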
00:13:35.222   11:35:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:13:35.222   11:35:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:35.222   11:35:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:35.223  [2024-12-16 11:35:01.057926] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:35.223  [2024-12-16 11:35:01.058131] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:13:35.223  [2024-12-16 11:35:01.058156] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:13:35.223  [2024-12-16 11:35:01.058205] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:35.223  [2024-12-16 11:35:01.062803] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b000
00:13:35.223   11:35:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:35.223   11:35:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1
00:13:35.223  [2024-12-16 11:35:01.064926] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
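The 11:35:01 messages show why the spare is rebuilt rather than simply trusted: its superblock sequence number (4) is older than the array's (5), so raid_bdev_examine_sb re-adds it as a rebuild target and a rebuild process starts before the test's one-second sleep expires. A condensed version of the same round trip; the polling loop is an illustrative addition, the test itself just sleeps:

    rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
    until [[ $(rpc_cmd bdev_raid_get_bdevs all \
              | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"') == rebuild ]]; do
        sleep 0.1
    done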
00:13:36.161   11:35:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:36.161   11:35:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:36.161   11:35:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:36.161   11:35:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:36.161   11:35:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:36.161    11:35:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:36.161    11:35:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:36.161    11:35:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:36.161    11:35:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:36.161    11:35:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:36.161   11:35:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:36.161    "name": "raid_bdev1",
00:13:36.161    "uuid": "1269a009-c5fb-4d5b-bb0f-74f8d8270f7e",
00:13:36.161    "strip_size_kb": 0,
00:13:36.161    "state": "online",
00:13:36.161    "raid_level": "raid1",
00:13:36.161    "superblock": true,
00:13:36.161    "num_base_bdevs": 2,
00:13:36.161    "num_base_bdevs_discovered": 2,
00:13:36.161    "num_base_bdevs_operational": 2,
00:13:36.161    "process": {
00:13:36.161      "type": "rebuild",
00:13:36.161      "target": "spare",
00:13:36.161      "progress": {
00:13:36.161        "blocks": 20480,
00:13:36.161        "percent": 32
00:13:36.161      }
00:13:36.161    },
00:13:36.161    "base_bdevs_list": [
00:13:36.161      {
00:13:36.161        "name": "spare",
00:13:36.161        "uuid": "ecc509c3-9524-5de3-9790-33a7ba24f147",
00:13:36.161        "is_configured": true,
00:13:36.161        "data_offset": 2048,
00:13:36.161        "data_size": 63488
00:13:36.161      },
00:13:36.161      {
00:13:36.161        "name": "BaseBdev2",
00:13:36.161        "uuid": "ddb5f269-78ac-5ca6-b421-034a882f7659",
00:13:36.161        "is_configured": true,
00:13:36.161        "data_offset": 2048,
00:13:36.161        "data_size": 63488
00:13:36.161      }
00:13:36.161    ]
00:13:36.161  }'
00:13:36.161    11:35:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:36.161   11:35:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:36.161    11:35:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:36.161   11:35:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:36.161   11:35:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:13:36.421   11:35:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:36.421   11:35:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:36.421  [2024-12-16 11:35:02.233097] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:36.421  [2024-12-16 11:35:02.270072] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:13:36.421  [2024-12-16 11:35:02.270136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:36.421  [2024-12-16 11:35:02.270155] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:36.421  [2024-12-16 11:35:02.270162] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:13:36.421   11:35:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
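Deleting the spare passthru while it is still the rebuild target is the point of this step: the 11:35:02 messages show the rebuild finishing with "No such device" and the target-removal callback reporting the same error, after which the array is expected to be merely degraded, not broken. Condensed from the trace (the jq check is illustrative):

    rpc_cmd bdev_passthru_delete spare
    rpc_cmd bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_discovered'   # expect 1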
00:13:36.421   11:35:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:36.421   11:35:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:36.421   11:35:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:36.421   11:35:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:36.421   11:35:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:36.421   11:35:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:13:36.421   11:35:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:36.421   11:35:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:36.421   11:35:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:36.421   11:35:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:36.421    11:35:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:36.421    11:35:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:36.421    11:35:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:36.421    11:35:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:36.421    11:35:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:36.421   11:35:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:36.421    "name": "raid_bdev1",
00:13:36.421    "uuid": "1269a009-c5fb-4d5b-bb0f-74f8d8270f7e",
00:13:36.421    "strip_size_kb": 0,
00:13:36.421    "state": "online",
00:13:36.421    "raid_level": "raid1",
00:13:36.421    "superblock": true,
00:13:36.421    "num_base_bdevs": 2,
00:13:36.421    "num_base_bdevs_discovered": 1,
00:13:36.421    "num_base_bdevs_operational": 1,
00:13:36.421    "base_bdevs_list": [
00:13:36.421      {
00:13:36.421        "name": null,
00:13:36.421        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:36.421        "is_configured": false,
00:13:36.421        "data_offset": 0,
00:13:36.421        "data_size": 63488
00:13:36.421      },
00:13:36.421      {
00:13:36.421        "name": "BaseBdev2",
00:13:36.421        "uuid": "ddb5f269-78ac-5ca6-b421-034a882f7659",
00:13:36.421        "is_configured": true,
00:13:36.421        "data_offset": 2048,
00:13:36.421        "data_size": 63488
00:13:36.421      }
00:13:36.421    ]
00:13:36.421  }'
00:13:36.421   11:35:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:36.421   11:35:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:36.680   11:35:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:13:36.680   11:35:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:36.680   11:35:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:36.680  [2024-12-16 11:35:02.710140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:13:36.680  [2024-12-16 11:35:02.710295] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:36.680  [2024-12-16 11:35:02.710358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:13:36.680  [2024-12-16 11:35:02.710402] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:36.681  [2024-12-16 11:35:02.710929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:36.681  [2024-12-16 11:35:02.710998] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:13:36.681  [2024-12-16 11:35:02.711137] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:13:36.681  [2024-12-16 11:35:02.711182] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:13:36.681  [2024-12-16 11:35:02.711245] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:13:36.681  [2024-12-16 11:35:02.711298] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:36.681  [2024-12-16 11:35:02.715899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0
00:13:36.681  spare
00:13:36.681   11:35:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:36.681   11:35:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1
00:13:36.681  [2024-12-16 11:35:02.718078] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:38.061   11:35:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:38.061   11:35:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:38.061   11:35:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:38.061   11:35:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:38.061   11:35:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:38.061    11:35:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:38.061    11:35:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:38.061    11:35:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:38.061    11:35:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:38.061    11:35:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:38.061   11:35:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:38.061    "name": "raid_bdev1",
00:13:38.061    "uuid": "1269a009-c5fb-4d5b-bb0f-74f8d8270f7e",
00:13:38.061    "strip_size_kb": 0,
00:13:38.061    "state": "online",
00:13:38.061    "raid_level": "raid1",
00:13:38.061    "superblock": true,
00:13:38.061    "num_base_bdevs": 2,
00:13:38.061    "num_base_bdevs_discovered": 2,
00:13:38.061    "num_base_bdevs_operational": 2,
00:13:38.061    "process": {
00:13:38.061      "type": "rebuild",
00:13:38.061      "target": "spare",
00:13:38.061      "progress": {
00:13:38.061        "blocks": 20480,
00:13:38.061        "percent": 32
00:13:38.061      }
00:13:38.061    },
00:13:38.061    "base_bdevs_list": [
00:13:38.061      {
00:13:38.061        "name": "spare",
00:13:38.061        "uuid": "ecc509c3-9524-5de3-9790-33a7ba24f147",
00:13:38.061        "is_configured": true,
00:13:38.061        "data_offset": 2048,
00:13:38.061        "data_size": 63488
00:13:38.061      },
00:13:38.061      {
00:13:38.061        "name": "BaseBdev2",
00:13:38.061        "uuid": "ddb5f269-78ac-5ca6-b421-034a882f7659",
00:13:38.061        "is_configured": true,
00:13:38.061        "data_offset": 2048,
00:13:38.061        "data_size": 63488
00:13:38.061      }
00:13:38.061    ]
00:13:38.061  }'
00:13:38.061    11:35:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:38.061   11:35:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:38.061    11:35:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:38.061   11:35:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:38.061   11:35:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:13:38.061   11:35:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:38.061   11:35:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:38.061  [2024-12-16 11:35:03.878076] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:38.061  [2024-12-16 11:35:03.923073] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:13:38.061  [2024-12-16 11:35:03.923195] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:38.061  [2024-12-16 11:35:03.923241] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:38.061  [2024-12-16 11:35:03.923283] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:13:38.061   11:35:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:38.061   11:35:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:38.061   11:35:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:38.061   11:35:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:38.061   11:35:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:38.061   11:35:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:38.061   11:35:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:13:38.062   11:35:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:38.062   11:35:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:38.062   11:35:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:38.062   11:35:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:38.062    11:35:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:38.062    11:35:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:38.062    11:35:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:38.062    11:35:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:38.062    11:35:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:38.062   11:35:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:38.062    "name": "raid_bdev1",
00:13:38.062    "uuid": "1269a009-c5fb-4d5b-bb0f-74f8d8270f7e",
00:13:38.062    "strip_size_kb": 0,
00:13:38.062    "state": "online",
00:13:38.062    "raid_level": "raid1",
00:13:38.062    "superblock": true,
00:13:38.062    "num_base_bdevs": 2,
00:13:38.062    "num_base_bdevs_discovered": 1,
00:13:38.062    "num_base_bdevs_operational": 1,
00:13:38.062    "base_bdevs_list": [
00:13:38.062      {
00:13:38.062        "name": null,
00:13:38.062        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:38.062        "is_configured": false,
00:13:38.062        "data_offset": 0,
00:13:38.062        "data_size": 63488
00:13:38.062      },
00:13:38.062      {
00:13:38.062        "name": "BaseBdev2",
00:13:38.062        "uuid": "ddb5f269-78ac-5ca6-b421-034a882f7659",
00:13:38.062        "is_configured": true,
00:13:38.062        "data_offset": 2048,
00:13:38.062        "data_size": 63488
00:13:38.062      }
00:13:38.062    ]
00:13:38.062  }'
00:13:38.062   11:35:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:38.062   11:35:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:38.322   11:35:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:38.322   11:35:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:38.322   11:35:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:38.322   11:35:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:38.322   11:35:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:38.322    11:35:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:38.322    11:35:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:38.322    11:35:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:38.322    11:35:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:38.322    11:35:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:38.322   11:35:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:38.322    "name": "raid_bdev1",
00:13:38.322    "uuid": "1269a009-c5fb-4d5b-bb0f-74f8d8270f7e",
00:13:38.322    "strip_size_kb": 0,
00:13:38.322    "state": "online",
00:13:38.322    "raid_level": "raid1",
00:13:38.322    "superblock": true,
00:13:38.322    "num_base_bdevs": 2,
00:13:38.322    "num_base_bdevs_discovered": 1,
00:13:38.322    "num_base_bdevs_operational": 1,
00:13:38.322    "base_bdevs_list": [
00:13:38.322      {
00:13:38.322        "name": null,
00:13:38.322        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:38.322        "is_configured": false,
00:13:38.322        "data_offset": 0,
00:13:38.322        "data_size": 63488
00:13:38.322      },
00:13:38.322      {
00:13:38.322        "name": "BaseBdev2",
00:13:38.322        "uuid": "ddb5f269-78ac-5ca6-b421-034a882f7659",
00:13:38.322        "is_configured": true,
00:13:38.322        "data_offset": 2048,
00:13:38.322        "data_size": 63488
00:13:38.322      }
00:13:38.322    ]
00:13:38.322  }'
00:13:38.322    11:35:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:38.322   11:35:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:38.322    11:35:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:38.582   11:35:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:38.582   11:35:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:13:38.582   11:35:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:38.582   11:35:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:38.582   11:35:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:38.582   11:35:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:13:38.582   11:35:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:38.582   11:35:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:38.582  [2024-12-16 11:35:04.439251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:13:38.582  [2024-12-16 11:35:04.439367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:38.582  [2024-12-16 11:35:04.439393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:13:38.582  [2024-12-16 11:35:04.439405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:38.582  [2024-12-16 11:35:04.439861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:38.582  [2024-12-16 11:35:04.439886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:13:38.582  [2024-12-16 11:35:04.439966] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:13:38.582  [2024-12-16 11:35:04.439985] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:13:38.582  [2024-12-16 11:35:04.439993] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:13:38.582  [2024-12-16 11:35:04.440020] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:13:38.582  BaseBdev1
00:13:38.582   11:35:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:38.582   11:35:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1
00:13:39.580   11:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:39.580   11:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:39.580   11:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:39.580   11:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:39.580   11:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:39.580   11:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:13:39.580   11:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:39.580   11:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:39.580   11:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:39.580   11:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:39.580    11:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:39.580    11:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:39.580    11:35:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:39.580    11:35:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:39.580    11:35:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:39.580   11:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:39.580    "name": "raid_bdev1",
00:13:39.580    "uuid": "1269a009-c5fb-4d5b-bb0f-74f8d8270f7e",
00:13:39.581    "strip_size_kb": 0,
00:13:39.581    "state": "online",
00:13:39.581    "raid_level": "raid1",
00:13:39.581    "superblock": true,
00:13:39.581    "num_base_bdevs": 2,
00:13:39.581    "num_base_bdevs_discovered": 1,
00:13:39.581    "num_base_bdevs_operational": 1,
00:13:39.581    "base_bdevs_list": [
00:13:39.581      {
00:13:39.581        "name": null,
00:13:39.581        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:39.581        "is_configured": false,
00:13:39.581        "data_offset": 0,
00:13:39.581        "data_size": 63488
00:13:39.581      },
00:13:39.581      {
00:13:39.581        "name": "BaseBdev2",
00:13:39.581        "uuid": "ddb5f269-78ac-5ca6-b421-034a882f7659",
00:13:39.581        "is_configured": true,
00:13:39.581        "data_offset": 2048,
00:13:39.581        "data_size": 63488
00:13:39.581      }
00:13:39.581    ]
00:13:39.581  }'
00:13:39.581   11:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:39.581   11:35:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:39.840   11:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:39.841   11:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:39.841   11:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:39.841   11:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:39.841   11:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:39.841    11:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:39.841    11:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:39.841    11:35:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:39.841    11:35:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:39.841    11:35:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:40.104   11:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:40.104    "name": "raid_bdev1",
00:13:40.104    "uuid": "1269a009-c5fb-4d5b-bb0f-74f8d8270f7e",
00:13:40.104    "strip_size_kb": 0,
00:13:40.104    "state": "online",
00:13:40.104    "raid_level": "raid1",
00:13:40.104    "superblock": true,
00:13:40.104    "num_base_bdevs": 2,
00:13:40.104    "num_base_bdevs_discovered": 1,
00:13:40.104    "num_base_bdevs_operational": 1,
00:13:40.104    "base_bdevs_list": [
00:13:40.104      {
00:13:40.104        "name": null,
00:13:40.104        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:40.104        "is_configured": false,
00:13:40.104        "data_offset": 0,
00:13:40.104        "data_size": 63488
00:13:40.104      },
00:13:40.104      {
00:13:40.104        "name": "BaseBdev2",
00:13:40.104        "uuid": "ddb5f269-78ac-5ca6-b421-034a882f7659",
00:13:40.104        "is_configured": true,
00:13:40.104        "data_offset": 2048,
00:13:40.104        "data_size": 63488
00:13:40.104      }
00:13:40.104    ]
00:13:40.104  }'
00:13:40.104    11:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:40.104   11:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:40.104    11:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:40.104   11:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:40.104   11:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:13:40.104   11:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0
00:13:40.104   11:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:13:40.104   11:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:13:40.104   11:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:40.104    11:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:13:40.104   11:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:40.104   11:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:13:40.104   11:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:40.104   11:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:40.104  [2024-12-16 11:35:06.024811] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:40.104  [2024-12-16 11:35:06.024997] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:13:40.104  [2024-12-16 11:35:06.025011] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:13:40.104  request:
00:13:40.104  {
00:13:40.104  "base_bdev": "BaseBdev1",
00:13:40.105  "raid_bdev": "raid_bdev1",
00:13:40.105  "method": "bdev_raid_add_base_bdev",
00:13:40.105  "req_id": 1
00:13:40.105  }
00:13:40.105  Got JSON-RPC error response
00:13:40.105  response:
00:13:40.105  {
00:13:40.105  "code": -22,
00:13:40.105  "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:13:40.105  }
00:13:40.105   11:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:13:40.105   11:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1
00:13:40.105   11:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:13:40.105   11:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:13:40.105   11:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:13:40.105   11:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1
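The NOT wrapper from autotest_common.sh expanded above inverts the exit status of the wrapped command, so the expected -22 "Invalid argument" from bdev_raid_add_base_bdev (BaseBdev1's stale superblock does not reference this array) keeps the test green. A sketch of the inversion logic implied by the trace; the valid_exec_arg and signal-handling details of the real helper are simplified:

    # Simplified sketch of NOT(); only the behaviour visible in the trace is kept.
    NOT() {
        local es=0
        "$@" || es=$?                    # run the command, remember a non-zero status
        (( es > 128 )) && return "$es"   # assumption: crashes/signals are not counted as "expected failure"
        (( !es == 0 ))                   # success only when the wrapped command failed
    }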
00:13:41.047   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:41.047   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:41.047   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:41.047   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:41.047   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:41.047   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:13:41.047   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:41.047   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:41.047   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:41.047   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:41.047    11:35:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:41.047    11:35:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:41.047    11:35:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:41.047    11:35:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:41.047    11:35:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:41.047   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:41.047    "name": "raid_bdev1",
00:13:41.047    "uuid": "1269a009-c5fb-4d5b-bb0f-74f8d8270f7e",
00:13:41.047    "strip_size_kb": 0,
00:13:41.047    "state": "online",
00:13:41.047    "raid_level": "raid1",
00:13:41.047    "superblock": true,
00:13:41.047    "num_base_bdevs": 2,
00:13:41.047    "num_base_bdevs_discovered": 1,
00:13:41.047    "num_base_bdevs_operational": 1,
00:13:41.047    "base_bdevs_list": [
00:13:41.047      {
00:13:41.047        "name": null,
00:13:41.048        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:41.048        "is_configured": false,
00:13:41.048        "data_offset": 0,
00:13:41.048        "data_size": 63488
00:13:41.048      },
00:13:41.048      {
00:13:41.048        "name": "BaseBdev2",
00:13:41.048        "uuid": "ddb5f269-78ac-5ca6-b421-034a882f7659",
00:13:41.048        "is_configured": true,
00:13:41.048        "data_offset": 2048,
00:13:41.048        "data_size": 63488
00:13:41.048      }
00:13:41.048    ]
00:13:41.048  }'
00:13:41.048   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:41.048   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:41.617   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:41.617   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:41.617   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:41.617   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:41.617   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:41.617    11:35:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:41.617    11:35:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:41.617    11:35:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:41.617    11:35:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:41.617    11:35:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:41.618   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:41.618    "name": "raid_bdev1",
00:13:41.618    "uuid": "1269a009-c5fb-4d5b-bb0f-74f8d8270f7e",
00:13:41.618    "strip_size_kb": 0,
00:13:41.618    "state": "online",
00:13:41.618    "raid_level": "raid1",
00:13:41.618    "superblock": true,
00:13:41.618    "num_base_bdevs": 2,
00:13:41.618    "num_base_bdevs_discovered": 1,
00:13:41.618    "num_base_bdevs_operational": 1,
00:13:41.618    "base_bdevs_list": [
00:13:41.618      {
00:13:41.618        "name": null,
00:13:41.618        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:41.618        "is_configured": false,
00:13:41.618        "data_offset": 0,
00:13:41.618        "data_size": 63488
00:13:41.618      },
00:13:41.618      {
00:13:41.618        "name": "BaseBdev2",
00:13:41.618        "uuid": "ddb5f269-78ac-5ca6-b421-034a882f7659",
00:13:41.618        "is_configured": true,
00:13:41.618        "data_offset": 2048,
00:13:41.618        "data_size": 63488
00:13:41.618      }
00:13:41.618    ]
00:13:41.618  }'
00:13:41.618    11:35:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:41.618   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:41.618    11:35:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:41.618   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:41.618   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 87861
00:13:41.618   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 87861 ']'
00:13:41.618   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 87861
00:13:41.618    11:35:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname
00:13:41.618   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:13:41.618    11:35:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87861
00:13:41.618   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:13:41.618   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:13:41.618   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87861'
00:13:41.618  killing process with pid 87861
00:13:41.618   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 87861
00:13:41.618  Received shutdown signal, test time was about 17.830569 seconds
00:13:41.618  
00:13:41.618                                                                                                  Latency(us)
00:13:41.618  
[2024-12-16T11:35:07.685Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:41.618  
[2024-12-16T11:35:07.685Z]  ===================================================================================================================
00:13:41.618  
[2024-12-16T11:35:07.685Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:13:41.618  [2024-12-16 11:35:07.589705] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:41.618  [2024-12-16 11:35:07.589901] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:41.618  [2024-12-16 11:35:07.590011] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:41.618   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 87861
00:13:41.618  [2024-12-16 11:35:07.590030] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline
00:13:41.618  [2024-12-16 11:35:07.616244] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:41.878   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0
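killprocess, expanded above from autotest_common.sh, is the harness's guarded shutdown of the bdevperf process: verify the pid argument, confirm the process is alive with kill -0, look up its comm name, then kill and wait while the raid module logs its fini/destruct path. A bash sketch of that sequence as the trace shows it; the sudo special case probed in the trace is omitted here:

    # Sketch assembled from the trace; the real helper also special-cases processes named "sudo".
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                        # still running?
        local name
        [[ $(uname) == Linux ]] && name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }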
00:13:41.878  
00:13:41.878  real	0m19.766s
00:13:41.878  user	0m26.056s
00:13:41.878  sys	0m2.056s
00:13:41.878   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:41.878  ************************************
00:13:41.878  END TEST raid_rebuild_test_sb_io
00:13:41.878  ************************************
00:13:41.878   11:35:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:41.878   11:35:07 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4
00:13:41.878   11:35:07 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true
00:13:41.878   11:35:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:13:41.878   11:35:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:41.878   11:35:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:13:41.878  ************************************
00:13:41.878  START TEST raid_rebuild_test
00:13:41.878  ************************************
00:13:41.878   11:35:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true
00:13:41.878   11:35:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1
00:13:41.878   11:35:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4
00:13:41.878   11:35:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false
00:13:41.878   11:35:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:13:41.878   11:35:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true
00:13:41.878    11:35:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:13:41.878    11:35:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:13:41.878    11:35:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:13:41.878    11:35:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:13:41.878    11:35:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:13:41.878    11:35:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:13:41.878    11:35:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:13:41.878    11:35:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:13:41.878    11:35:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
00:13:41.878    11:35:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:13:41.878    11:35:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:13:41.878    11:35:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4
00:13:41.878    11:35:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:13:41.878    11:35:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:13:41.878   11:35:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:13:41.878   11:35:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:13:41.878   11:35:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:13:41.878   11:35:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size
00:13:41.878   11:35:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg
00:13:41.878   11:35:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:13:41.878   11:35:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset
00:13:41.878   11:35:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']'
00:13:41.878   11:35:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0
00:13:41.878  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:41.878   11:35:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']'
00:13:41.878   11:35:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:13:41.878   11:35:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=88553
00:13:41.878   11:35:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 88553
00:13:41.878   11:35:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 88553 ']'
00:13:41.878   11:35:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:41.878   11:35:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:41.878   11:35:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:41.878   11:35:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:41.878   11:35:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:42.138  [2024-12-16 11:35:08.009172] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:13:42.138  I/O size of 3145728 is greater than zero copy threshold (65536).
00:13:42.138  Zero copy mechanism will not be used.
00:13:42.138  [2024-12-16 11:35:08.009393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88553 ]
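The zero-copy notice above follows directly from the -o 3M argument passed to bdevperf a few lines earlier: 3 MiB = 3 * 1024 * 1024 = 3,145,728 bytes, which exceeds the 65,536-byte zero-copy threshold, so bdevperf falls back to copying buffers for this workload. The same arithmetic in shell form:

    echo $((3 * 1024 * 1024))   # 3145728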
00:13:42.138  [2024-12-16 11:35:08.166401] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:42.397  [2024-12-16 11:35:08.215381] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:13:42.397  [2024-12-16 11:35:08.260464] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:42.397  [2024-12-16 11:35:08.260503] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:42.966   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:42.966   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0
00:13:42.966   11:35:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:13:42.966   11:35:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:13:42.966   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:42.966   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:42.966  BaseBdev1_malloc
00:13:42.966   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:42.966   11:35:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:13:42.966   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:42.966   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:42.966  [2024-12-16 11:35:08.851142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:13:42.966  [2024-12-16 11:35:08.851216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:42.966  [2024-12-16 11:35:08.851257] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:13:42.966  [2024-12-16 11:35:08.851272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:42.966  [2024-12-16 11:35:08.853483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:42.966  [2024-12-16 11:35:08.853521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:13:42.966  BaseBdev1
00:13:42.966   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:42.966   11:35:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:13:42.966   11:35:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:13:42.966   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:42.966   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:42.966  BaseBdev2_malloc
00:13:42.966   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:42.966   11:35:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:13:42.966   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:42.966   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:42.967  [2024-12-16 11:35:08.889579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:13:42.967  [2024-12-16 11:35:08.889630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:42.967  [2024-12-16 11:35:08.889651] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:13:42.967  [2024-12-16 11:35:08.889660] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:42.967  [2024-12-16 11:35:08.891753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:42.967  [2024-12-16 11:35:08.891788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:13:42.967  BaseBdev2
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:42.967  BaseBdev3_malloc
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:42.967  [2024-12-16 11:35:08.918198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:13:42.967  [2024-12-16 11:35:08.918248] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:42.967  [2024-12-16 11:35:08.918272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:13:42.967  [2024-12-16 11:35:08.918280] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:42.967  [2024-12-16 11:35:08.920393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:42.967  [2024-12-16 11:35:08.920430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:13:42.967  BaseBdev3
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:42.967  BaseBdev4_malloc
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:42.967  [2024-12-16 11:35:08.946874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:13:42.967  [2024-12-16 11:35:08.946971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:42.967  [2024-12-16 11:35:08.947017] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:13:42.967  [2024-12-16 11:35:08.947026] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:42.967  [2024-12-16 11:35:08.949186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:42.967  [2024-12-16 11:35:08.949222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:13:42.967  BaseBdev4
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:42.967  spare_malloc
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:42.967  spare_delay
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:42.967  [2024-12-16 11:35:08.987578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:13:42.967  [2024-12-16 11:35:08.987631] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:42.967  [2024-12-16 11:35:08.987653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:13:42.967  [2024-12-16 11:35:08.987661] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:42.967  [2024-12-16 11:35:08.989709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:42.967  [2024-12-16 11:35:08.989745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:13:42.967  spare
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:42.967   11:35:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:42.967  [2024-12-16 11:35:08.999637] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:42.967  [2024-12-16 11:35:09.001419] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:42.967  [2024-12-16 11:35:09.001489] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:13:42.967  [2024-12-16 11:35:09.001531] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:13:42.967  [2024-12-16 11:35:09.001618] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:13:42.967  [2024-12-16 11:35:09.001629] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:13:42.967  [2024-12-16 11:35:09.001869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:13:42.967  [2024-12-16 11:35:09.002020] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:13:42.967  [2024-12-16 11:35:09.002039] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:13:42.967  [2024-12-16 11:35:09.002162] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:42.967   11:35:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:42.967   11:35:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:13:42.967   11:35:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:42.967   11:35:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:42.967   11:35:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:42.967   11:35:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:42.967   11:35:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:42.967   11:35:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:42.967   11:35:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:42.967   11:35:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:42.967   11:35:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:42.967    11:35:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:42.967    11:35:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:42.967    11:35:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:42.967    11:35:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:42.967    11:35:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:43.227   11:35:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:43.227    "name": "raid_bdev1",
00:13:43.227    "uuid": "c3a54a0e-44f1-4e7f-8a76-5e4e59d5f024",
00:13:43.227    "strip_size_kb": 0,
00:13:43.227    "state": "online",
00:13:43.227    "raid_level": "raid1",
00:13:43.227    "superblock": false,
00:13:43.227    "num_base_bdevs": 4,
00:13:43.227    "num_base_bdevs_discovered": 4,
00:13:43.227    "num_base_bdevs_operational": 4,
00:13:43.227    "base_bdevs_list": [
00:13:43.227      {
00:13:43.227        "name": "BaseBdev1",
00:13:43.227        "uuid": "e5cf25b6-11c7-51a6-8750-ec8d32e89a10",
00:13:43.227        "is_configured": true,
00:13:43.227        "data_offset": 0,
00:13:43.227        "data_size": 65536
00:13:43.227      },
00:13:43.227      {
00:13:43.227        "name": "BaseBdev2",
00:13:43.227        "uuid": "100fe363-7f4d-5d20-9ffd-5834a2e1e73f",
00:13:43.227        "is_configured": true,
00:13:43.227        "data_offset": 0,
00:13:43.227        "data_size": 65536
00:13:43.227      },
00:13:43.227      {
00:13:43.227        "name": "BaseBdev3",
00:13:43.227        "uuid": "f3ef9b2e-0159-5973-9ce1-7aaed9da3b70",
00:13:43.227        "is_configured": true,
00:13:43.227        "data_offset": 0,
00:13:43.227        "data_size": 65536
00:13:43.227      },
00:13:43.227      {
00:13:43.227        "name": "BaseBdev4",
00:13:43.227        "uuid": "42e0e080-5448-5304-997f-976108f3566e",
00:13:43.227        "is_configured": true,
00:13:43.227        "data_offset": 0,
00:13:43.227        "data_size": 65536
00:13:43.227      }
00:13:43.227    ]
00:13:43.227  }'
00:13:43.227   11:35:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:43.227   11:35:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:43.487    11:35:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:13:43.487    11:35:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:43.487    11:35:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:43.487    11:35:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:43.487  [2024-12-16 11:35:09.415259] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:43.487    11:35:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:43.487   11:35:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536
00:13:43.487    11:35:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:43.487    11:35:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:13:43.487    11:35:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:43.487    11:35:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:43.487    11:35:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:43.487   11:35:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0
00:13:43.487   11:35:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:13:43.487   11:35:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:13:43.487   11:35:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:13:43.487   11:35:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:13:43.487   11:35:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:13:43.487   11:35:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:13:43.487   11:35:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:13:43.487   11:35:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:13:43.487   11:35:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:13:43.487   11:35:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:13:43.487   11:35:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:13:43.487   11:35:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:43.487   11:35:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:13:43.747  [2024-12-16 11:35:09.662562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:13:43.747  /dev/nbd0
00:13:43.747    11:35:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:13:43.747   11:35:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:13:43.747   11:35:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:13:43.747   11:35:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i
00:13:43.747   11:35:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:13:43.747   11:35:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:13:43.747   11:35:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:13:43.747   11:35:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break
00:13:43.747   11:35:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:13:43.747   11:35:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:13:43.747   11:35:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:43.747  1+0 records in
00:13:43.747  1+0 records out
00:13:43.747  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026272 s, 15.6 MB/s
00:13:43.747    11:35:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:43.747   11:35:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096
00:13:43.747   11:35:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:43.747   11:35:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:13:43.747   11:35:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0
00:13:43.747   11:35:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:43.747   11:35:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:43.747   11:35:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:13:43.747   11:35:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:13:43.747   11:35:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct
00:13:49.027  65536+0 records in
00:13:49.027  65536+0 records out
00:13:49.027  33554432 bytes (34 MB, 32 MiB) copied, 5.20756 s, 6.4 MB/s
00:13:49.027   11:35:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:13:49.027   11:35:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:13:49.027   11:35:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:13:49.027   11:35:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:49.027   11:35:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:13:49.027   11:35:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:49.027   11:35:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:13:49.286    11:35:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:49.286   11:35:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:49.286   11:35:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:49.286  [2024-12-16 11:35:15.183316] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:49.286   11:35:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:49.286   11:35:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:49.286   11:35:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:49.286   11:35:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:13:49.286   11:35:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:13:49.286   11:35:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:13:49.286   11:35:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:49.286   11:35:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:49.286  [2024-12-16 11:35:15.195382] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:13:49.286   11:35:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:49.286   11:35:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:13:49.286   11:35:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:49.286   11:35:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:49.286   11:35:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:49.286   11:35:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:49.286   11:35:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:49.286   11:35:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:49.286   11:35:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:49.286   11:35:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:49.286   11:35:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:49.286    11:35:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:49.286    11:35:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:49.286    11:35:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:49.286    11:35:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:49.286    11:35:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:49.286   11:35:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:49.286    "name": "raid_bdev1",
00:13:49.286    "uuid": "c3a54a0e-44f1-4e7f-8a76-5e4e59d5f024",
00:13:49.286    "strip_size_kb": 0,
00:13:49.286    "state": "online",
00:13:49.286    "raid_level": "raid1",
00:13:49.286    "superblock": false,
00:13:49.286    "num_base_bdevs": 4,
00:13:49.286    "num_base_bdevs_discovered": 3,
00:13:49.286    "num_base_bdevs_operational": 3,
00:13:49.286    "base_bdevs_list": [
00:13:49.286      {
00:13:49.286        "name": null,
00:13:49.286        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:49.286        "is_configured": false,
00:13:49.286        "data_offset": 0,
00:13:49.286        "data_size": 65536
00:13:49.286      },
00:13:49.286      {
00:13:49.286        "name": "BaseBdev2",
00:13:49.286        "uuid": "100fe363-7f4d-5d20-9ffd-5834a2e1e73f",
00:13:49.286        "is_configured": true,
00:13:49.286        "data_offset": 0,
00:13:49.286        "data_size": 65536
00:13:49.286      },
00:13:49.286      {
00:13:49.286        "name": "BaseBdev3",
00:13:49.286        "uuid": "f3ef9b2e-0159-5973-9ce1-7aaed9da3b70",
00:13:49.286        "is_configured": true,
00:13:49.286        "data_offset": 0,
00:13:49.286        "data_size": 65536
00:13:49.286      },
00:13:49.286      {
00:13:49.286        "name": "BaseBdev4",
00:13:49.286        "uuid": "42e0e080-5448-5304-997f-976108f3566e",
00:13:49.286        "is_configured": true,
00:13:49.286        "data_offset": 0,
00:13:49.286        "data_size": 65536
00:13:49.286      }
00:13:49.286    ]
00:13:49.286  }'
00:13:49.286   11:35:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:49.286   11:35:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:49.854   11:35:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:13:49.854   11:35:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:49.854   11:35:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:49.854  [2024-12-16 11:35:15.658680] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:49.854  [2024-12-16 11:35:15.662150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0
00:13:49.854  [2024-12-16 11:35:15.664073] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:49.854   11:35:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:49.854   11:35:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1
00:13:50.792   11:35:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:50.792   11:35:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:50.792   11:35:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:50.792   11:35:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:50.793   11:35:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:50.793    11:35:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:50.793    11:35:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:50.793    11:35:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:50.793    11:35:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:50.793    11:35:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:50.793   11:35:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:50.793    "name": "raid_bdev1",
00:13:50.793    "uuid": "c3a54a0e-44f1-4e7f-8a76-5e4e59d5f024",
00:13:50.793    "strip_size_kb": 0,
00:13:50.793    "state": "online",
00:13:50.793    "raid_level": "raid1",
00:13:50.793    "superblock": false,
00:13:50.793    "num_base_bdevs": 4,
00:13:50.793    "num_base_bdevs_discovered": 4,
00:13:50.793    "num_base_bdevs_operational": 4,
00:13:50.793    "process": {
00:13:50.793      "type": "rebuild",
00:13:50.793      "target": "spare",
00:13:50.793      "progress": {
00:13:50.793        "blocks": 20480,
00:13:50.793        "percent": 31
00:13:50.793      }
00:13:50.793    },
00:13:50.793    "base_bdevs_list": [
00:13:50.793      {
00:13:50.793        "name": "spare",
00:13:50.793        "uuid": "a4e8104e-148a-57fe-8431-10ff1dc3fb06",
00:13:50.793        "is_configured": true,
00:13:50.793        "data_offset": 0,
00:13:50.793        "data_size": 65536
00:13:50.793      },
00:13:50.793      {
00:13:50.793        "name": "BaseBdev2",
00:13:50.793        "uuid": "100fe363-7f4d-5d20-9ffd-5834a2e1e73f",
00:13:50.793        "is_configured": true,
00:13:50.793        "data_offset": 0,
00:13:50.793        "data_size": 65536
00:13:50.793      },
00:13:50.793      {
00:13:50.793        "name": "BaseBdev3",
00:13:50.793        "uuid": "f3ef9b2e-0159-5973-9ce1-7aaed9da3b70",
00:13:50.793        "is_configured": true,
00:13:50.793        "data_offset": 0,
00:13:50.793        "data_size": 65536
00:13:50.793      },
00:13:50.793      {
00:13:50.793        "name": "BaseBdev4",
00:13:50.793        "uuid": "42e0e080-5448-5304-997f-976108f3566e",
00:13:50.793        "is_configured": true,
00:13:50.793        "data_offset": 0,
00:13:50.793        "data_size": 65536
00:13:50.793      }
00:13:50.793    ]
00:13:50.793  }'
00:13:50.793    11:35:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:50.793   11:35:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:50.793    11:35:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:50.793   11:35:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:50.793   11:35:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:13:50.793   11:35:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:50.793   11:35:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:50.793  [2024-12-16 11:35:16.826985] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:51.053  [2024-12-16 11:35:16.868988] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:13:51.053  [2024-12-16 11:35:16.869113] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:51.053  [2024-12-16 11:35:16.869135] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:51.053  [2024-12-16 11:35:16.869144] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:13:51.053   11:35:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:51.053   11:35:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:13:51.053   11:35:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:51.053   11:35:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:51.053   11:35:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:51.053   11:35:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:51.053   11:35:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:51.053   11:35:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:51.053   11:35:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:51.053   11:35:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:51.053   11:35:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:51.053    11:35:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:51.053    11:35:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:51.053    11:35:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:51.053    11:35:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:51.053    11:35:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:51.053   11:35:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:51.053    "name": "raid_bdev1",
00:13:51.053    "uuid": "c3a54a0e-44f1-4e7f-8a76-5e4e59d5f024",
00:13:51.053    "strip_size_kb": 0,
00:13:51.053    "state": "online",
00:13:51.053    "raid_level": "raid1",
00:13:51.053    "superblock": false,
00:13:51.053    "num_base_bdevs": 4,
00:13:51.053    "num_base_bdevs_discovered": 3,
00:13:51.053    "num_base_bdevs_operational": 3,
00:13:51.053    "base_bdevs_list": [
00:13:51.053      {
00:13:51.053        "name": null,
00:13:51.053        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:51.053        "is_configured": false,
00:13:51.053        "data_offset": 0,
00:13:51.053        "data_size": 65536
00:13:51.053      },
00:13:51.053      {
00:13:51.053        "name": "BaseBdev2",
00:13:51.053        "uuid": "100fe363-7f4d-5d20-9ffd-5834a2e1e73f",
00:13:51.053        "is_configured": true,
00:13:51.053        "data_offset": 0,
00:13:51.053        "data_size": 65536
00:13:51.053      },
00:13:51.053      {
00:13:51.053        "name": "BaseBdev3",
00:13:51.053        "uuid": "f3ef9b2e-0159-5973-9ce1-7aaed9da3b70",
00:13:51.053        "is_configured": true,
00:13:51.053        "data_offset": 0,
00:13:51.053        "data_size": 65536
00:13:51.053      },
00:13:51.053      {
00:13:51.053        "name": "BaseBdev4",
00:13:51.053        "uuid": "42e0e080-5448-5304-997f-976108f3566e",
00:13:51.053        "is_configured": true,
00:13:51.053        "data_offset": 0,
00:13:51.053        "data_size": 65536
00:13:51.053      }
00:13:51.053    ]
00:13:51.053  }'
00:13:51.053   11:35:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:51.053   11:35:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:51.313   11:35:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:51.313   11:35:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:51.313   11:35:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:51.313   11:35:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:51.313   11:35:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:51.313    11:35:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:51.313    11:35:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:51.313    11:35:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:51.313    11:35:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:51.313    11:35:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:51.313   11:35:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:51.313    "name": "raid_bdev1",
00:13:51.313    "uuid": "c3a54a0e-44f1-4e7f-8a76-5e4e59d5f024",
00:13:51.313    "strip_size_kb": 0,
00:13:51.313    "state": "online",
00:13:51.313    "raid_level": "raid1",
00:13:51.313    "superblock": false,
00:13:51.313    "num_base_bdevs": 4,
00:13:51.313    "num_base_bdevs_discovered": 3,
00:13:51.313    "num_base_bdevs_operational": 3,
00:13:51.313    "base_bdevs_list": [
00:13:51.313      {
00:13:51.313        "name": null,
00:13:51.313        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:51.313        "is_configured": false,
00:13:51.313        "data_offset": 0,
00:13:51.313        "data_size": 65536
00:13:51.313      },
00:13:51.313      {
00:13:51.313        "name": "BaseBdev2",
00:13:51.313        "uuid": "100fe363-7f4d-5d20-9ffd-5834a2e1e73f",
00:13:51.313        "is_configured": true,
00:13:51.313        "data_offset": 0,
00:13:51.313        "data_size": 65536
00:13:51.313      },
00:13:51.313      {
00:13:51.313        "name": "BaseBdev3",
00:13:51.313        "uuid": "f3ef9b2e-0159-5973-9ce1-7aaed9da3b70",
00:13:51.313        "is_configured": true,
00:13:51.313        "data_offset": 0,
00:13:51.313        "data_size": 65536
00:13:51.313      },
00:13:51.313      {
00:13:51.313        "name": "BaseBdev4",
00:13:51.313        "uuid": "42e0e080-5448-5304-997f-976108f3566e",
00:13:51.313        "is_configured": true,
00:13:51.313        "data_offset": 0,
00:13:51.313        "data_size": 65536
00:13:51.313      }
00:13:51.313    ]
00:13:51.313  }'
00:13:51.313    11:35:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:51.573   11:35:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:51.573    11:35:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:51.573   11:35:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:51.573   11:35:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:13:51.573   11:35:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:51.573   11:35:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:51.573  [2024-12-16 11:35:17.464163] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:51.573  [2024-12-16 11:35:17.467544] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0
00:13:51.573  [2024-12-16 11:35:17.469432] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:51.573   11:35:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:51.573   11:35:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1
00:13:52.512   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:52.512   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:52.512   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:52.512   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:52.512   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:52.512    11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:52.512    11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:52.512    11:35:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:52.512    11:35:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:52.512    11:35:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:52.512   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:52.512    "name": "raid_bdev1",
00:13:52.512    "uuid": "c3a54a0e-44f1-4e7f-8a76-5e4e59d5f024",
00:13:52.512    "strip_size_kb": 0,
00:13:52.512    "state": "online",
00:13:52.512    "raid_level": "raid1",
00:13:52.512    "superblock": false,
00:13:52.512    "num_base_bdevs": 4,
00:13:52.512    "num_base_bdevs_discovered": 4,
00:13:52.512    "num_base_bdevs_operational": 4,
00:13:52.512    "process": {
00:13:52.512      "type": "rebuild",
00:13:52.512      "target": "spare",
00:13:52.512      "progress": {
00:13:52.512        "blocks": 20480,
00:13:52.512        "percent": 31
00:13:52.512      }
00:13:52.512    },
00:13:52.512    "base_bdevs_list": [
00:13:52.512      {
00:13:52.512        "name": "spare",
00:13:52.512        "uuid": "a4e8104e-148a-57fe-8431-10ff1dc3fb06",
00:13:52.512        "is_configured": true,
00:13:52.512        "data_offset": 0,
00:13:52.512        "data_size": 65536
00:13:52.512      },
00:13:52.512      {
00:13:52.512        "name": "BaseBdev2",
00:13:52.512        "uuid": "100fe363-7f4d-5d20-9ffd-5834a2e1e73f",
00:13:52.512        "is_configured": true,
00:13:52.512        "data_offset": 0,
00:13:52.512        "data_size": 65536
00:13:52.512      },
00:13:52.512      {
00:13:52.512        "name": "BaseBdev3",
00:13:52.512        "uuid": "f3ef9b2e-0159-5973-9ce1-7aaed9da3b70",
00:13:52.512        "is_configured": true,
00:13:52.512        "data_offset": 0,
00:13:52.512        "data_size": 65536
00:13:52.512      },
00:13:52.512      {
00:13:52.512        "name": "BaseBdev4",
00:13:52.512        "uuid": "42e0e080-5448-5304-997f-976108f3566e",
00:13:52.512        "is_configured": true,
00:13:52.512        "data_offset": 0,
00:13:52.512        "data_size": 65536
00:13:52.512      }
00:13:52.512    ]
00:13:52.512  }'
00:13:52.512    11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:52.772   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:52.772    11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:52.772   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:52.772   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']'
00:13:52.772   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4
00:13:52.772   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:13:52.772   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']'
00:13:52.772   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:13:52.772   11:35:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:52.772   11:35:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:52.772  [2024-12-16 11:35:18.640420] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:13:52.772  [2024-12-16 11:35:18.673831] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09ca0
00:13:52.772   11:35:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:52.772   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]=
00:13:52.772   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- ))
00:13:52.772   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:52.772   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:52.772   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:52.772   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:52.772   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:52.772    11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:52.772    11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:52.772    11:35:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:52.772    11:35:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:52.772    11:35:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:52.772   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:52.772    "name": "raid_bdev1",
00:13:52.772    "uuid": "c3a54a0e-44f1-4e7f-8a76-5e4e59d5f024",
00:13:52.772    "strip_size_kb": 0,
00:13:52.772    "state": "online",
00:13:52.772    "raid_level": "raid1",
00:13:52.772    "superblock": false,
00:13:52.772    "num_base_bdevs": 4,
00:13:52.772    "num_base_bdevs_discovered": 3,
00:13:52.772    "num_base_bdevs_operational": 3,
00:13:52.772    "process": {
00:13:52.772      "type": "rebuild",
00:13:52.772      "target": "spare",
00:13:52.772      "progress": {
00:13:52.772        "blocks": 24576,
00:13:52.772        "percent": 37
00:13:52.772      }
00:13:52.772    },
00:13:52.772    "base_bdevs_list": [
00:13:52.772      {
00:13:52.772        "name": "spare",
00:13:52.772        "uuid": "a4e8104e-148a-57fe-8431-10ff1dc3fb06",
00:13:52.772        "is_configured": true,
00:13:52.772        "data_offset": 0,
00:13:52.772        "data_size": 65536
00:13:52.772      },
00:13:52.772      {
00:13:52.772        "name": null,
00:13:52.772        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:52.772        "is_configured": false,
00:13:52.772        "data_offset": 0,
00:13:52.772        "data_size": 65536
00:13:52.772      },
00:13:52.772      {
00:13:52.772        "name": "BaseBdev3",
00:13:52.772        "uuid": "f3ef9b2e-0159-5973-9ce1-7aaed9da3b70",
00:13:52.772        "is_configured": true,
00:13:52.772        "data_offset": 0,
00:13:52.772        "data_size": 65536
00:13:52.772      },
00:13:52.772      {
00:13:52.772        "name": "BaseBdev4",
00:13:52.772        "uuid": "42e0e080-5448-5304-997f-976108f3566e",
00:13:52.772        "is_configured": true,
00:13:52.772        "data_offset": 0,
00:13:52.772        "data_size": 65536
00:13:52.772      }
00:13:52.772    ]
00:13:52.772  }'
00:13:52.772    11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:52.772   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:52.772    11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:52.772   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:52.772   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=371
00:13:52.772   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:13:52.772   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:52.772   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:52.772   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:52.772   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:52.772   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:53.032    11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:53.032    11:35:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.032    11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:53.032    11:35:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.032    11:35:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.032   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:53.032    "name": "raid_bdev1",
00:13:53.032    "uuid": "c3a54a0e-44f1-4e7f-8a76-5e4e59d5f024",
00:13:53.032    "strip_size_kb": 0,
00:13:53.032    "state": "online",
00:13:53.032    "raid_level": "raid1",
00:13:53.032    "superblock": false,
00:13:53.032    "num_base_bdevs": 4,
00:13:53.032    "num_base_bdevs_discovered": 3,
00:13:53.032    "num_base_bdevs_operational": 3,
00:13:53.032    "process": {
00:13:53.032      "type": "rebuild",
00:13:53.032      "target": "spare",
00:13:53.032      "progress": {
00:13:53.032        "blocks": 26624,
00:13:53.032        "percent": 40
00:13:53.032      }
00:13:53.032    },
00:13:53.032    "base_bdevs_list": [
00:13:53.032      {
00:13:53.032        "name": "spare",
00:13:53.032        "uuid": "a4e8104e-148a-57fe-8431-10ff1dc3fb06",
00:13:53.032        "is_configured": true,
00:13:53.032        "data_offset": 0,
00:13:53.032        "data_size": 65536
00:13:53.032      },
00:13:53.032      {
00:13:53.032        "name": null,
00:13:53.032        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:53.032        "is_configured": false,
00:13:53.032        "data_offset": 0,
00:13:53.032        "data_size": 65536
00:13:53.032      },
00:13:53.032      {
00:13:53.032        "name": "BaseBdev3",
00:13:53.032        "uuid": "f3ef9b2e-0159-5973-9ce1-7aaed9da3b70",
00:13:53.032        "is_configured": true,
00:13:53.032        "data_offset": 0,
00:13:53.032        "data_size": 65536
00:13:53.032      },
00:13:53.032      {
00:13:53.032        "name": "BaseBdev4",
00:13:53.032        "uuid": "42e0e080-5448-5304-997f-976108f3566e",
00:13:53.032        "is_configured": true,
00:13:53.032        "data_offset": 0,
00:13:53.032        "data_size": 65536
00:13:53.032      }
00:13:53.032    ]
00:13:53.032  }'
00:13:53.032    11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:53.032   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:53.032    11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:53.032   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:53.032   11:35:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1
00:13:53.971   11:35:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:13:53.971   11:35:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:53.971   11:35:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:53.971   11:35:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:53.971   11:35:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:53.971   11:35:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:53.971    11:35:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:53.971    11:35:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.971    11:35:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:53.971    11:35:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.971    11:35:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.971   11:35:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:53.971    "name": "raid_bdev1",
00:13:53.971    "uuid": "c3a54a0e-44f1-4e7f-8a76-5e4e59d5f024",
00:13:53.971    "strip_size_kb": 0,
00:13:53.971    "state": "online",
00:13:53.971    "raid_level": "raid1",
00:13:53.971    "superblock": false,
00:13:53.971    "num_base_bdevs": 4,
00:13:53.971    "num_base_bdevs_discovered": 3,
00:13:53.971    "num_base_bdevs_operational": 3,
00:13:53.971    "process": {
00:13:53.971      "type": "rebuild",
00:13:53.971      "target": "spare",
00:13:53.971      "progress": {
00:13:53.971        "blocks": 51200,
00:13:53.971        "percent": 78
00:13:53.971      }
00:13:53.971    },
00:13:53.971    "base_bdevs_list": [
00:13:53.971      {
00:13:53.971        "name": "spare",
00:13:53.971        "uuid": "a4e8104e-148a-57fe-8431-10ff1dc3fb06",
00:13:53.971        "is_configured": true,
00:13:53.971        "data_offset": 0,
00:13:53.971        "data_size": 65536
00:13:53.971      },
00:13:53.971      {
00:13:53.971        "name": null,
00:13:53.971        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:53.971        "is_configured": false,
00:13:53.971        "data_offset": 0,
00:13:53.971        "data_size": 65536
00:13:53.971      },
00:13:53.971      {
00:13:53.971        "name": "BaseBdev3",
00:13:53.971        "uuid": "f3ef9b2e-0159-5973-9ce1-7aaed9da3b70",
00:13:53.971        "is_configured": true,
00:13:53.971        "data_offset": 0,
00:13:53.971        "data_size": 65536
00:13:53.971      },
00:13:53.971      {
00:13:53.971        "name": "BaseBdev4",
00:13:53.971        "uuid": "42e0e080-5448-5304-997f-976108f3566e",
00:13:53.971        "is_configured": true,
00:13:53.971        "data_offset": 0,
00:13:53.971        "data_size": 65536
00:13:53.971      }
00:13:53.971    ]
00:13:53.971  }'
00:13:53.971    11:35:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:54.231   11:35:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:54.231    11:35:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:54.231   11:35:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:54.231   11:35:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1
00:13:54.800  [2024-12-16 11:35:20.681786] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:13:54.800  [2024-12-16 11:35:20.681968] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:13:54.800  [2024-12-16 11:35:20.682019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:55.060   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:13:55.060   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:55.060   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:55.060   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:55.060   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:55.060   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:55.320    11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:55.320    11:35:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:55.320    11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:55.320    11:35:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.320    11:35:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:55.320   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:55.320    "name": "raid_bdev1",
00:13:55.320    "uuid": "c3a54a0e-44f1-4e7f-8a76-5e4e59d5f024",
00:13:55.320    "strip_size_kb": 0,
00:13:55.320    "state": "online",
00:13:55.320    "raid_level": "raid1",
00:13:55.320    "superblock": false,
00:13:55.320    "num_base_bdevs": 4,
00:13:55.320    "num_base_bdevs_discovered": 3,
00:13:55.320    "num_base_bdevs_operational": 3,
00:13:55.320    "base_bdevs_list": [
00:13:55.320      {
00:13:55.320        "name": "spare",
00:13:55.320        "uuid": "a4e8104e-148a-57fe-8431-10ff1dc3fb06",
00:13:55.320        "is_configured": true,
00:13:55.320        "data_offset": 0,
00:13:55.320        "data_size": 65536
00:13:55.320      },
00:13:55.320      {
00:13:55.320        "name": null,
00:13:55.320        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:55.320        "is_configured": false,
00:13:55.320        "data_offset": 0,
00:13:55.320        "data_size": 65536
00:13:55.320      },
00:13:55.320      {
00:13:55.320        "name": "BaseBdev3",
00:13:55.320        "uuid": "f3ef9b2e-0159-5973-9ce1-7aaed9da3b70",
00:13:55.320        "is_configured": true,
00:13:55.320        "data_offset": 0,
00:13:55.320        "data_size": 65536
00:13:55.320      },
00:13:55.320      {
00:13:55.320        "name": "BaseBdev4",
00:13:55.320        "uuid": "42e0e080-5448-5304-997f-976108f3566e",
00:13:55.320        "is_configured": true,
00:13:55.320        "data_offset": 0,
00:13:55.320        "data_size": 65536
00:13:55.320      }
00:13:55.320    ]
00:13:55.320  }'
00:13:55.320    11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:55.320   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:13:55.320    11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:55.320   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:13:55.320   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break
00:13:55.320   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:55.320   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:55.320   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:55.320   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:55.320   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:55.320    11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:55.320    11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:55.320    11:35:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:55.320    11:35:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.320    11:35:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:55.320   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:55.320    "name": "raid_bdev1",
00:13:55.320    "uuid": "c3a54a0e-44f1-4e7f-8a76-5e4e59d5f024",
00:13:55.320    "strip_size_kb": 0,
00:13:55.320    "state": "online",
00:13:55.320    "raid_level": "raid1",
00:13:55.320    "superblock": false,
00:13:55.320    "num_base_bdevs": 4,
00:13:55.320    "num_base_bdevs_discovered": 3,
00:13:55.320    "num_base_bdevs_operational": 3,
00:13:55.320    "base_bdevs_list": [
00:13:55.320      {
00:13:55.320        "name": "spare",
00:13:55.320        "uuid": "a4e8104e-148a-57fe-8431-10ff1dc3fb06",
00:13:55.320        "is_configured": true,
00:13:55.320        "data_offset": 0,
00:13:55.320        "data_size": 65536
00:13:55.320      },
00:13:55.320      {
00:13:55.320        "name": null,
00:13:55.320        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:55.320        "is_configured": false,
00:13:55.320        "data_offset": 0,
00:13:55.320        "data_size": 65536
00:13:55.320      },
00:13:55.320      {
00:13:55.320        "name": "BaseBdev3",
00:13:55.320        "uuid": "f3ef9b2e-0159-5973-9ce1-7aaed9da3b70",
00:13:55.320        "is_configured": true,
00:13:55.320        "data_offset": 0,
00:13:55.320        "data_size": 65536
00:13:55.320      },
00:13:55.320      {
00:13:55.320        "name": "BaseBdev4",
00:13:55.320        "uuid": "42e0e080-5448-5304-997f-976108f3566e",
00:13:55.320        "is_configured": true,
00:13:55.320        "data_offset": 0,
00:13:55.320        "data_size": 65536
00:13:55.320      }
00:13:55.320    ]
00:13:55.320  }'
00:13:55.320    11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:55.320   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:55.320    11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:55.580   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:55.580   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:13:55.580   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:55.580   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:55.580   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:55.580   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:55.580   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:55.580   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:55.580   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:55.580   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:55.580   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:55.580    11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:55.580    11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:55.580    11:35:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:55.580    11:35:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.580    11:35:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:55.580   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:55.580    "name": "raid_bdev1",
00:13:55.580    "uuid": "c3a54a0e-44f1-4e7f-8a76-5e4e59d5f024",
00:13:55.580    "strip_size_kb": 0,
00:13:55.580    "state": "online",
00:13:55.580    "raid_level": "raid1",
00:13:55.580    "superblock": false,
00:13:55.580    "num_base_bdevs": 4,
00:13:55.580    "num_base_bdevs_discovered": 3,
00:13:55.580    "num_base_bdevs_operational": 3,
00:13:55.580    "base_bdevs_list": [
00:13:55.580      {
00:13:55.580        "name": "spare",
00:13:55.580        "uuid": "a4e8104e-148a-57fe-8431-10ff1dc3fb06",
00:13:55.580        "is_configured": true,
00:13:55.580        "data_offset": 0,
00:13:55.580        "data_size": 65536
00:13:55.580      },
00:13:55.580      {
00:13:55.580        "name": null,
00:13:55.580        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:55.580        "is_configured": false,
00:13:55.580        "data_offset": 0,
00:13:55.580        "data_size": 65536
00:13:55.580      },
00:13:55.580      {
00:13:55.580        "name": "BaseBdev3",
00:13:55.580        "uuid": "f3ef9b2e-0159-5973-9ce1-7aaed9da3b70",
00:13:55.580        "is_configured": true,
00:13:55.580        "data_offset": 0,
00:13:55.580        "data_size": 65536
00:13:55.580      },
00:13:55.580      {
00:13:55.580        "name": "BaseBdev4",
00:13:55.580        "uuid": "42e0e080-5448-5304-997f-976108f3566e",
00:13:55.580        "is_configured": true,
00:13:55.580        "data_offset": 0,
00:13:55.580        "data_size": 65536
00:13:55.580      }
00:13:55.580    ]
00:13:55.580  }'
00:13:55.580   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:55.580   11:35:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.840   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:13:55.840   11:35:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:55.840   11:35:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.840  [2024-12-16 11:35:21.836254] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:55.840  [2024-12-16 11:35:21.836349] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:55.840  [2024-12-16 11:35:21.836466] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:55.840  [2024-12-16 11:35:21.836610] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:55.840  [2024-12-16 11:35:21.836685] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:13:55.840   11:35:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:55.840    11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length
00:13:55.840    11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:55.840    11:35:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:55.840    11:35:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.840    11:35:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:55.840   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:13:55.840   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:13:55.840   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:13:55.840   11:35:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:13:55.840   11:35:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:13:55.840   11:35:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:13:55.840   11:35:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:13:55.840   11:35:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:13:55.840   11:35:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:13:55.840   11:35:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:13:55.840   11:35:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:13:55.840   11:35:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:13:55.840   11:35:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:13:56.100  /dev/nbd0
00:13:56.100    11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:13:56.100   11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:13:56.100   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:13:56.100   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i
00:13:56.100   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:13:56.100   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:13:56.100   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:13:56.100   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break
00:13:56.100   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:13:56.100   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:13:56.100   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:56.100  1+0 records in
00:13:56.100  1+0 records out
00:13:56.100  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315374 s, 13.0 MB/s
00:13:56.100    11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:56.100   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096
00:13:56.100   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:56.100   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:13:56.100   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0
00:13:56.100   11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:56.100   11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:13:56.100   11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1
00:13:56.358  /dev/nbd1
00:13:56.358    11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:13:56.358   11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:13:56.358   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:13:56.358   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i
00:13:56.358   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:13:56.358   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:13:56.358   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:13:56.358   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break
00:13:56.358   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:13:56.358   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:13:56.358   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:56.358  1+0 records in
00:13:56.358  1+0 records out
00:13:56.358  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241613 s, 17.0 MB/s
00:13:56.358    11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:56.358   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096
00:13:56.358   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:56.358   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:13:56.358   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0
00:13:56.358   11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:56.358   11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:13:56.358   11:35:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
00:13:56.616   11:35:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:13:56.616   11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:13:56.616   11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:13:56.616   11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:56.616   11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:13:56.616   11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:56.616   11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:13:56.876    11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:56.876   11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:56.876   11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:56.876   11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:56.876   11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:56.876   11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:56.876   11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:13:56.876   11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:13:56.876   11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:56.876   11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:13:56.876    11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:13:56.876   11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:13:56.876   11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:13:56.876   11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:56.876   11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:56.876   11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:13:56.876   11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:13:56.876   11:35:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:13:56.876   11:35:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']'
00:13:56.876   11:35:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 88553
00:13:56.876   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 88553 ']'
00:13:56.876   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 88553
00:13:56.876    11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname
00:13:57.134   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:13:57.134    11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88553
00:13:57.134  killing process with pid 88553
00:13:57.134  Received shutdown signal, test time was about 60.000000 seconds
00:13:57.134  
00:13:57.134                                                                                                  Latency(us)
00:13:57.134  
[2024-12-16T11:35:23.201Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:57.134  
[2024-12-16T11:35:23.201Z]  ===================================================================================================================
00:13:57.134  
[2024-12-16T11:35:23.201Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:13:57.134   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:13:57.134   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:13:57.134   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88553'
00:13:57.134   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 88553
00:13:57.134  [2024-12-16 11:35:22.978785] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:57.134   11:35:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 88553
00:13:57.134  [2024-12-16 11:35:23.029165] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:57.397   11:35:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0
00:13:57.397  
00:13:57.397  real	0m15.367s
00:13:57.397  user	0m17.811s
00:13:57.397  sys	0m2.918s
00:13:57.397   11:35:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:57.397  ************************************
00:13:57.397  END TEST raid_rebuild_test
00:13:57.397  ************************************
00:13:57.397   11:35:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:57.397   11:35:23 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true
00:13:57.397   11:35:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:13:57.397   11:35:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:57.397   11:35:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:13:57.397  ************************************
00:13:57.397  START TEST raid_rebuild_test_sb
00:13:57.397  ************************************
00:13:57.397   11:35:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true
00:13:57.397   11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1
00:13:57.397   11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4
00:13:57.397   11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true
00:13:57.397   11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:13:57.397   11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true
00:13:57.397    11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:13:57.397    11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:13:57.397    11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:13:57.397    11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:13:57.397    11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:13:57.397    11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:13:57.397    11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:13:57.397    11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:13:57.397    11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
00:13:57.397    11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:13:57.397    11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:13:57.397    11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4
00:13:57.397    11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:13:57.397    11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:13:57.397   11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:13:57.397   11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:13:57.397   11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:13:57.397   11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size
00:13:57.397   11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg
00:13:57.397   11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:13:57.397   11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset
00:13:57.397   11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']'
00:13:57.397   11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0
00:13:57.397   11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
00:13:57.397   11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
00:13:57.397   11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=88978
00:13:57.397   11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:13:57.397   11:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 88978
00:13:57.397   11:35:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 88978 ']'
00:13:57.397   11:35:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:57.397   11:35:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:57.397   11:35:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:57.397  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:57.397   11:35:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:57.397   11:35:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:57.397  I/O size of 3145728 is greater than zero copy threshold (65536).
00:13:57.397  Zero copy mechanism will not be used.
00:13:57.397  [2024-12-16 11:35:23.435199] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:13:57.397  [2024-12-16 11:35:23.435355] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88978 ]
00:13:57.662  [2024-12-16 11:35:23.597254] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:57.662  [2024-12-16 11:35:23.647954] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:13:57.662  [2024-12-16 11:35:23.691368] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:57.662  [2024-12-16 11:35:23.691401] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:58.231   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:58.231   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0
00:13:58.231   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:13:58.231   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:13:58.231   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.231   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:58.231  BaseBdev1_malloc
00:13:58.231   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.231   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:13:58.231   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.231   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:58.231  [2024-12-16 11:35:24.289613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:13:58.231  [2024-12-16 11:35:24.289679] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:58.231  [2024-12-16 11:35:24.289707] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:13:58.231  [2024-12-16 11:35:24.289722] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:58.231  [2024-12-16 11:35:24.291876] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:58.231  [2024-12-16 11:35:24.291966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:13:58.231  BaseBdev1
00:13:58.231   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.231   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:13:58.231   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:13:58.231   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.231   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:58.491  BaseBdev2_malloc
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:58.491  [2024-12-16 11:35:24.327785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:13:58.491  [2024-12-16 11:35:24.327904] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:58.491  [2024-12-16 11:35:24.327936] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:13:58.491  [2024-12-16 11:35:24.327947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:58.491  [2024-12-16 11:35:24.330542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:58.491  [2024-12-16 11:35:24.330589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:13:58.491  BaseBdev2
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:58.491  BaseBdev3_malloc
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:58.491  [2024-12-16 11:35:24.356307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:13:58.491  [2024-12-16 11:35:24.356397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:58.491  [2024-12-16 11:35:24.356456] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:13:58.491  [2024-12-16 11:35:24.356484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:58.491  [2024-12-16 11:35:24.358556] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:58.491  [2024-12-16 11:35:24.358622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:13:58.491  BaseBdev3
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:58.491  BaseBdev4_malloc
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:58.491  [2024-12-16 11:35:24.384870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:13:58.491  [2024-12-16 11:35:24.384982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:58.491  [2024-12-16 11:35:24.385010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:13:58.491  [2024-12-16 11:35:24.385018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:58.491  [2024-12-16 11:35:24.386995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:58.491  [2024-12-16 11:35:24.387030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:13:58.491  BaseBdev4
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:58.491  spare_malloc
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:58.491  spare_delay
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:58.491  [2024-12-16 11:35:24.425329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:13:58.491  [2024-12-16 11:35:24.425385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:58.491  [2024-12-16 11:35:24.425422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:13:58.491  [2024-12-16 11:35:24.425430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:58.491  [2024-12-16 11:35:24.427484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:58.491  [2024-12-16 11:35:24.427522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:13:58.491  spare
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:58.491  [2024-12-16 11:35:24.437393] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:58.491  [2024-12-16 11:35:24.439227] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:58.491  [2024-12-16 11:35:24.439318] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:13:58.491  [2024-12-16 11:35:24.439368] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:13:58.491  [2024-12-16 11:35:24.439582] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:13:58.491  [2024-12-16 11:35:24.439613] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:13:58.491  [2024-12-16 11:35:24.439885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:13:58.491  [2024-12-16 11:35:24.440053] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:13:58.491  [2024-12-16 11:35:24.440067] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:13:58.491  [2024-12-16 11:35:24.440207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:58.491    11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:58.491    11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:58.491    11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.491    11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:58.491    11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.491   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:58.491    "name": "raid_bdev1",
00:13:58.491    "uuid": "79eba291-e3ba-443e-b2ad-76b48a08a4c9",
00:13:58.491    "strip_size_kb": 0,
00:13:58.491    "state": "online",
00:13:58.491    "raid_level": "raid1",
00:13:58.491    "superblock": true,
00:13:58.491    "num_base_bdevs": 4,
00:13:58.491    "num_base_bdevs_discovered": 4,
00:13:58.491    "num_base_bdevs_operational": 4,
00:13:58.491    "base_bdevs_list": [
00:13:58.491      {
00:13:58.491        "name": "BaseBdev1",
00:13:58.491        "uuid": "09ad84aa-1dd7-52a4-9243-152bcc652881",
00:13:58.492        "is_configured": true,
00:13:58.492        "data_offset": 2048,
00:13:58.492        "data_size": 63488
00:13:58.492      },
00:13:58.492      {
00:13:58.492        "name": "BaseBdev2",
00:13:58.492        "uuid": "072ade99-1b2c-5a1f-bd48-a362be38303b",
00:13:58.492        "is_configured": true,
00:13:58.492        "data_offset": 2048,
00:13:58.492        "data_size": 63488
00:13:58.492      },
00:13:58.492      {
00:13:58.492        "name": "BaseBdev3",
00:13:58.492        "uuid": "9495898e-2bff-5729-a3a8-1e4427ac2112",
00:13:58.492        "is_configured": true,
00:13:58.492        "data_offset": 2048,
00:13:58.492        "data_size": 63488
00:13:58.492      },
00:13:58.492      {
00:13:58.492        "name": "BaseBdev4",
00:13:58.492        "uuid": "1df3c0d1-9a12-53c8-a6af-579f501564ef",
00:13:58.492        "is_configured": true,
00:13:58.492        "data_offset": 2048,
00:13:58.492        "data_size": 63488
00:13:58.492      }
00:13:58.492    ]
00:13:58.492  }'
00:13:58.492   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:58.492   11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:59.058    11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:59.058    11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:59.058    11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:59.058    11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:13:59.058  [2024-12-16 11:35:24.837028] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:59.058    11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:59.058   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488
00:13:59.058    11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:59.058    11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:13:59.058    11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:59.058    11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:59.058    11:35:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:59.058   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048
00:13:59.058   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:13:59.058   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:13:59.058   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:13:59.058   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:13:59.058   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:13:59.058   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:13:59.058   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:13:59.058   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:13:59.058   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:13:59.058   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:13:59.058   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:13:59.058   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:59.058   11:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:13:59.058  [2024-12-16 11:35:25.116300] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:13:59.316  /dev/nbd0
00:13:59.316    11:35:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:13:59.316   11:35:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:13:59.316   11:35:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:13:59.316   11:35:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i
00:13:59.316   11:35:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:13:59.316   11:35:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:13:59.316   11:35:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:13:59.316   11:35:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break
00:13:59.316   11:35:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:13:59.316   11:35:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:13:59.316   11:35:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:59.316  1+0 records in
00:13:59.316  1+0 records out
00:13:59.316  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034475 s, 11.9 MB/s
00:13:59.316    11:35:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:59.316   11:35:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096
00:13:59.316   11:35:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:59.316   11:35:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:13:59.316   11:35:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0
00:13:59.316   11:35:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:59.316   11:35:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:59.316   11:35:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:13:59.316   11:35:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:13:59.316   11:35:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct
00:14:04.591  63488+0 records in
00:14:04.592  63488+0 records out
00:14:04.592  32505856 bytes (33 MB, 31 MiB) copied, 5.11028 s, 6.4 MB/s
00:14:04.592   11:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:14:04.592   11:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:14:04.592   11:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:14:04.592   11:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:14:04.592   11:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:14:04.592   11:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:14:04.592   11:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:14:04.592    11:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:14:04.592  [2024-12-16 11:35:30.541381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:04.592   11:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:14:04.592   11:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:14:04.592   11:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:14:04.592   11:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:14:04.592   11:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:14:04.592   11:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:14:04.592   11:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:14:04.592   11:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:14:04.592   11:35:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:04.592   11:35:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:04.592  [2024-12-16 11:35:30.559439] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:14:04.592   11:35:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:04.592   11:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:14:04.592   11:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:04.592   11:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:04.592   11:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:04.592   11:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:04.592   11:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:04.592   11:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:04.592   11:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:04.592   11:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:04.592   11:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:04.592    11:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:04.592    11:35:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:04.592    11:35:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:04.592    11:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:04.592    11:35:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:04.592   11:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:04.592    "name": "raid_bdev1",
00:14:04.592    "uuid": "79eba291-e3ba-443e-b2ad-76b48a08a4c9",
00:14:04.592    "strip_size_kb": 0,
00:14:04.592    "state": "online",
00:14:04.592    "raid_level": "raid1",
00:14:04.592    "superblock": true,
00:14:04.592    "num_base_bdevs": 4,
00:14:04.592    "num_base_bdevs_discovered": 3,
00:14:04.592    "num_base_bdevs_operational": 3,
00:14:04.592    "base_bdevs_list": [
00:14:04.592      {
00:14:04.592        "name": null,
00:14:04.592        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:04.592        "is_configured": false,
00:14:04.592        "data_offset": 0,
00:14:04.592        "data_size": 63488
00:14:04.592      },
00:14:04.592      {
00:14:04.592        "name": "BaseBdev2",
00:14:04.592        "uuid": "072ade99-1b2c-5a1f-bd48-a362be38303b",
00:14:04.592        "is_configured": true,
00:14:04.592        "data_offset": 2048,
00:14:04.592        "data_size": 63488
00:14:04.592      },
00:14:04.592      {
00:14:04.592        "name": "BaseBdev3",
00:14:04.592        "uuid": "9495898e-2bff-5729-a3a8-1e4427ac2112",
00:14:04.592        "is_configured": true,
00:14:04.592        "data_offset": 2048,
00:14:04.592        "data_size": 63488
00:14:04.592      },
00:14:04.592      {
00:14:04.592        "name": "BaseBdev4",
00:14:04.592        "uuid": "1df3c0d1-9a12-53c8-a6af-579f501564ef",
00:14:04.592        "is_configured": true,
00:14:04.592        "data_offset": 2048,
00:14:04.592        "data_size": 63488
00:14:04.592      }
00:14:04.592    ]
00:14:04.592  }'
00:14:04.592   11:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:04.592   11:35:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:05.160   11:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:14:05.160   11:35:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.160   11:35:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:05.160  [2024-12-16 11:35:31.002759] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:05.160  [2024-12-16 11:35:31.006298] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360
00:14:05.160  [2024-12-16 11:35:31.008348] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:05.160   11:35:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.160   11:35:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1
00:14:06.131   11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:06.131   11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:06.131   11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:06.131   11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:06.131   11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:06.131    11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:06.131    11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:06.131    11:35:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.131    11:35:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:06.131    11:35:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.131   11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:06.131    "name": "raid_bdev1",
00:14:06.131    "uuid": "79eba291-e3ba-443e-b2ad-76b48a08a4c9",
00:14:06.131    "strip_size_kb": 0,
00:14:06.131    "state": "online",
00:14:06.131    "raid_level": "raid1",
00:14:06.131    "superblock": true,
00:14:06.131    "num_base_bdevs": 4,
00:14:06.131    "num_base_bdevs_discovered": 4,
00:14:06.131    "num_base_bdevs_operational": 4,
00:14:06.131    "process": {
00:14:06.131      "type": "rebuild",
00:14:06.131      "target": "spare",
00:14:06.132      "progress": {
00:14:06.132        "blocks": 20480,
00:14:06.132        "percent": 32
00:14:06.132      }
00:14:06.132    },
00:14:06.132    "base_bdevs_list": [
00:14:06.132      {
00:14:06.132        "name": "spare",
00:14:06.132        "uuid": "06edf1b6-88a0-5daf-8a52-31ac3cc3eae8",
00:14:06.132        "is_configured": true,
00:14:06.132        "data_offset": 2048,
00:14:06.132        "data_size": 63488
00:14:06.132      },
00:14:06.132      {
00:14:06.132        "name": "BaseBdev2",
00:14:06.132        "uuid": "072ade99-1b2c-5a1f-bd48-a362be38303b",
00:14:06.132        "is_configured": true,
00:14:06.132        "data_offset": 2048,
00:14:06.132        "data_size": 63488
00:14:06.132      },
00:14:06.132      {
00:14:06.132        "name": "BaseBdev3",
00:14:06.132        "uuid": "9495898e-2bff-5729-a3a8-1e4427ac2112",
00:14:06.132        "is_configured": true,
00:14:06.132        "data_offset": 2048,
00:14:06.132        "data_size": 63488
00:14:06.132      },
00:14:06.132      {
00:14:06.132        "name": "BaseBdev4",
00:14:06.132        "uuid": "1df3c0d1-9a12-53c8-a6af-579f501564ef",
00:14:06.132        "is_configured": true,
00:14:06.132        "data_offset": 2048,
00:14:06.132        "data_size": 63488
00:14:06.132      }
00:14:06.132    ]
00:14:06.132  }'
00:14:06.132    11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:06.132   11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:06.132    11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:06.132   11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
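verify_raid_bdev_process (bdev_raid.sh@174-177, traced above) fetches the raid bdev's JSON and checks the in-flight process type and target, defaulting both to "none" when no process is reported. A condensed sketch of the same checks:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.process.type   // "none"' <<< "$info") == rebuild ]]   # a rebuild is running
    [[ $(jq -r '.process.target // "none"' <<< "$info") == spare   ]]   # and it targets "spare"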
00:14:06.132   11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:14:06.132   11:35:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.132   11:35:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:06.132  [2024-12-16 11:35:32.166886] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:06.391  [2024-12-16 11:35:32.213626] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:14:06.391  [2024-12-16 11:35:32.213744] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:06.391  [2024-12-16 11:35:32.213785] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:06.391  [2024-12-16 11:35:32.213807] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:14:06.391   11:35:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
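Here the rebuild target itself is pulled out while the rebuild is still running: the module finishes the process with the "Finished rebuild ... No such device" warning and logs "Failed to remove target bdev: No such device", and the array drops back to three discovered members. A sketch of the same step, reusing the rpc.py invocation seen in this log:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    rpc bdev_raid_remove_base_bdev spare            # remove the in-flight rebuild target
    rpc bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_discovered'   # expect 3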
00:14:06.391   11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:14:06.391   11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:06.391   11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:06.391   11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:06.391   11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:06.391   11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:06.391   11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:06.391   11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:06.391   11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:06.391   11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:06.391    11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:06.391    11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:06.391    11:35:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.391    11:35:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:06.391    11:35:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.391   11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:06.391    "name": "raid_bdev1",
00:14:06.391    "uuid": "79eba291-e3ba-443e-b2ad-76b48a08a4c9",
00:14:06.391    "strip_size_kb": 0,
00:14:06.391    "state": "online",
00:14:06.391    "raid_level": "raid1",
00:14:06.391    "superblock": true,
00:14:06.391    "num_base_bdevs": 4,
00:14:06.391    "num_base_bdevs_discovered": 3,
00:14:06.391    "num_base_bdevs_operational": 3,
00:14:06.391    "base_bdevs_list": [
00:14:06.391      {
00:14:06.391        "name": null,
00:14:06.391        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:06.391        "is_configured": false,
00:14:06.391        "data_offset": 0,
00:14:06.391        "data_size": 63488
00:14:06.391      },
00:14:06.391      {
00:14:06.391        "name": "BaseBdev2",
00:14:06.391        "uuid": "072ade99-1b2c-5a1f-bd48-a362be38303b",
00:14:06.391        "is_configured": true,
00:14:06.391        "data_offset": 2048,
00:14:06.391        "data_size": 63488
00:14:06.391      },
00:14:06.391      {
00:14:06.391        "name": "BaseBdev3",
00:14:06.391        "uuid": "9495898e-2bff-5729-a3a8-1e4427ac2112",
00:14:06.391        "is_configured": true,
00:14:06.391        "data_offset": 2048,
00:14:06.391        "data_size": 63488
00:14:06.391      },
00:14:06.391      {
00:14:06.391        "name": "BaseBdev4",
00:14:06.391        "uuid": "1df3c0d1-9a12-53c8-a6af-579f501564ef",
00:14:06.391        "is_configured": true,
00:14:06.391        "data_offset": 2048,
00:14:06.391        "data_size": 63488
00:14:06.391      }
00:14:06.391    ]
00:14:06.391  }'
00:14:06.391   11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:06.391   11:35:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
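verify_raid_bdev_state (bdev_raid.sh@103-115) then checks the top-level fields of the raid bdev: still online, still raid1, and down to 3 operational members, with slot 0 reduced to the null placeholder carrying the all-zero uuid. The helper's comparisons are not fully traced here, but the checks amount to roughly:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r .state      <<< "$info") == online ]]
    [[ $(jq -r .raid_level <<< "$info") == raid1  ]]
    (( $(jq -r .num_base_bdevs_operational <<< "$info") == 3 ))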
00:14:06.650   11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:06.650   11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:06.650   11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:06.650   11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:06.650   11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:06.650    11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:06.650    11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:06.650    11:35:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.650    11:35:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:06.650    11:35:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.908   11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:06.908    "name": "raid_bdev1",
00:14:06.908    "uuid": "79eba291-e3ba-443e-b2ad-76b48a08a4c9",
00:14:06.908    "strip_size_kb": 0,
00:14:06.908    "state": "online",
00:14:06.908    "raid_level": "raid1",
00:14:06.908    "superblock": true,
00:14:06.908    "num_base_bdevs": 4,
00:14:06.908    "num_base_bdevs_discovered": 3,
00:14:06.908    "num_base_bdevs_operational": 3,
00:14:06.908    "base_bdevs_list": [
00:14:06.908      {
00:14:06.908        "name": null,
00:14:06.908        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:06.908        "is_configured": false,
00:14:06.908        "data_offset": 0,
00:14:06.908        "data_size": 63488
00:14:06.908      },
00:14:06.908      {
00:14:06.908        "name": "BaseBdev2",
00:14:06.908        "uuid": "072ade99-1b2c-5a1f-bd48-a362be38303b",
00:14:06.908        "is_configured": true,
00:14:06.908        "data_offset": 2048,
00:14:06.908        "data_size": 63488
00:14:06.908      },
00:14:06.908      {
00:14:06.908        "name": "BaseBdev3",
00:14:06.908        "uuid": "9495898e-2bff-5729-a3a8-1e4427ac2112",
00:14:06.908        "is_configured": true,
00:14:06.908        "data_offset": 2048,
00:14:06.908        "data_size": 63488
00:14:06.908      },
00:14:06.908      {
00:14:06.908        "name": "BaseBdev4",
00:14:06.908        "uuid": "1df3c0d1-9a12-53c8-a6af-579f501564ef",
00:14:06.908        "is_configured": true,
00:14:06.908        "data_offset": 2048,
00:14:06.908        "data_size": 63488
00:14:06.908      }
00:14:06.908    ]
00:14:06.908  }'
00:14:06.908    11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:06.908   11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:14:06.908    11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:06.908   11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:14:06.908   11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:14:06.908   11:35:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.908   11:35:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:06.908  [2024-12-16 11:35:32.809144] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:06.908  [2024-12-16 11:35:32.812603] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430
00:14:06.908  [2024-12-16 11:35:32.814601] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:06.908   11:35:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.908   11:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1
00:14:07.844   11:35:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:07.844   11:35:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:07.844   11:35:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:07.844   11:35:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:07.844   11:35:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:07.844    11:35:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:07.844    11:35:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:07.844    11:35:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.844    11:35:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:07.844    11:35:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.844   11:35:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:07.844    "name": "raid_bdev1",
00:14:07.844    "uuid": "79eba291-e3ba-443e-b2ad-76b48a08a4c9",
00:14:07.844    "strip_size_kb": 0,
00:14:07.844    "state": "online",
00:14:07.844    "raid_level": "raid1",
00:14:07.844    "superblock": true,
00:14:07.844    "num_base_bdevs": 4,
00:14:07.844    "num_base_bdevs_discovered": 4,
00:14:07.844    "num_base_bdevs_operational": 4,
00:14:07.844    "process": {
00:14:07.844      "type": "rebuild",
00:14:07.844      "target": "spare",
00:14:07.844      "progress": {
00:14:07.844        "blocks": 20480,
00:14:07.844        "percent": 32
00:14:07.844      }
00:14:07.844    },
00:14:07.844    "base_bdevs_list": [
00:14:07.844      {
00:14:07.844        "name": "spare",
00:14:07.844        "uuid": "06edf1b6-88a0-5daf-8a52-31ac3cc3eae8",
00:14:07.844        "is_configured": true,
00:14:07.844        "data_offset": 2048,
00:14:07.844        "data_size": 63488
00:14:07.844      },
00:14:07.844      {
00:14:07.844        "name": "BaseBdev2",
00:14:07.844        "uuid": "072ade99-1b2c-5a1f-bd48-a362be38303b",
00:14:07.844        "is_configured": true,
00:14:07.844        "data_offset": 2048,
00:14:07.844        "data_size": 63488
00:14:07.844      },
00:14:07.844      {
00:14:07.844        "name": "BaseBdev3",
00:14:07.844        "uuid": "9495898e-2bff-5729-a3a8-1e4427ac2112",
00:14:07.844        "is_configured": true,
00:14:07.844        "data_offset": 2048,
00:14:07.844        "data_size": 63488
00:14:07.844      },
00:14:07.844      {
00:14:07.844        "name": "BaseBdev4",
00:14:07.844        "uuid": "1df3c0d1-9a12-53c8-a6af-579f501564ef",
00:14:07.844        "is_configured": true,
00:14:07.844        "data_offset": 2048,
00:14:07.844        "data_size": 63488
00:14:07.844      }
00:14:07.844    ]
00:14:07.844  }'
00:14:07.844    11:35:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:08.104   11:35:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:08.104    11:35:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:08.104   11:35:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:08.104   11:35:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:14:08.104   11:35:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:14:08.104  /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
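The "[: =: unary operator expected" message above is the classic single-bracket failure: the variable on the left of the test at bdev_raid.sh line 666 expanded to an empty string (the trace shows '[' = false ']'), so the [ builtin sees only "= false". Quoting the operand, or using [[ ]], avoids it; a minimal reproduction (the variable name below is illustrative, not taken from bdev_raid.sh):

    flag=""                   # empty in this code path
    [ $flag = false ]         # expands to: [ = false ]  ->  "unary operator expected"
    [ "$flag" = false ]       # quoted: compares "" with "false", exits non-zero, no error
    [[ $flag = false ]]       # [[ ]] does not word-split, so the empty value is handled safely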
00:14:08.104   11:35:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4
00:14:08.104   11:35:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:14:08.104   11:35:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']'
00:14:08.104   11:35:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:14:08.104   11:35:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:08.104   11:35:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:08.104  [2024-12-16 11:35:33.973147] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:14:08.104  [2024-12-16 11:35:34.119168] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca3430
00:14:08.104   11:35:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:08.104   11:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]=
00:14:08.104   11:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- ))
00:14:08.104   11:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:08.104   11:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:08.104   11:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:08.104   11:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:08.104   11:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:08.104    11:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:08.104    11:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:08.104    11:35:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:08.104    11:35:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:08.104    11:35:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:08.364   11:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:08.364    "name": "raid_bdev1",
00:14:08.364    "uuid": "79eba291-e3ba-443e-b2ad-76b48a08a4c9",
00:14:08.364    "strip_size_kb": 0,
00:14:08.364    "state": "online",
00:14:08.364    "raid_level": "raid1",
00:14:08.364    "superblock": true,
00:14:08.364    "num_base_bdevs": 4,
00:14:08.364    "num_base_bdevs_discovered": 3,
00:14:08.364    "num_base_bdevs_operational": 3,
00:14:08.364    "process": {
00:14:08.364      "type": "rebuild",
00:14:08.364      "target": "spare",
00:14:08.364      "progress": {
00:14:08.364        "blocks": 24576,
00:14:08.364        "percent": 38
00:14:08.364      }
00:14:08.364    },
00:14:08.364    "base_bdevs_list": [
00:14:08.364      {
00:14:08.364        "name": "spare",
00:14:08.364        "uuid": "06edf1b6-88a0-5daf-8a52-31ac3cc3eae8",
00:14:08.364        "is_configured": true,
00:14:08.364        "data_offset": 2048,
00:14:08.364        "data_size": 63488
00:14:08.364      },
00:14:08.364      {
00:14:08.364        "name": null,
00:14:08.364        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:08.364        "is_configured": false,
00:14:08.364        "data_offset": 0,
00:14:08.364        "data_size": 63488
00:14:08.364      },
00:14:08.364      {
00:14:08.364        "name": "BaseBdev3",
00:14:08.364        "uuid": "9495898e-2bff-5729-a3a8-1e4427ac2112",
00:14:08.364        "is_configured": true,
00:14:08.364        "data_offset": 2048,
00:14:08.364        "data_size": 63488
00:14:08.364      },
00:14:08.364      {
00:14:08.364        "name": "BaseBdev4",
00:14:08.364        "uuid": "1df3c0d1-9a12-53c8-a6af-579f501564ef",
00:14:08.364        "is_configured": true,
00:14:08.364        "data_offset": 2048,
00:14:08.364        "data_size": 63488
00:14:08.364      }
00:14:08.364    ]
00:14:08.364  }'
00:14:08.364    11:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:08.364   11:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:08.364    11:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:08.364   11:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:08.364   11:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=387
00:14:08.364   11:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:14:08.364   11:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:08.364   11:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:08.364   11:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:08.364   11:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:08.364   11:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:08.364    11:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:08.364    11:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:08.364    11:35:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:08.364    11:35:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:08.364    11:35:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:08.364   11:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:08.364    "name": "raid_bdev1",
00:14:08.364    "uuid": "79eba291-e3ba-443e-b2ad-76b48a08a4c9",
00:14:08.364    "strip_size_kb": 0,
00:14:08.364    "state": "online",
00:14:08.364    "raid_level": "raid1",
00:14:08.364    "superblock": true,
00:14:08.364    "num_base_bdevs": 4,
00:14:08.364    "num_base_bdevs_discovered": 3,
00:14:08.364    "num_base_bdevs_operational": 3,
00:14:08.364    "process": {
00:14:08.364      "type": "rebuild",
00:14:08.364      "target": "spare",
00:14:08.364      "progress": {
00:14:08.364        "blocks": 26624,
00:14:08.364        "percent": 41
00:14:08.364      }
00:14:08.364    },
00:14:08.364    "base_bdevs_list": [
00:14:08.364      {
00:14:08.364        "name": "spare",
00:14:08.364        "uuid": "06edf1b6-88a0-5daf-8a52-31ac3cc3eae8",
00:14:08.364        "is_configured": true,
00:14:08.364        "data_offset": 2048,
00:14:08.364        "data_size": 63488
00:14:08.364      },
00:14:08.364      {
00:14:08.364        "name": null,
00:14:08.364        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:08.364        "is_configured": false,
00:14:08.364        "data_offset": 0,
00:14:08.364        "data_size": 63488
00:14:08.364      },
00:14:08.364      {
00:14:08.364        "name": "BaseBdev3",
00:14:08.364        "uuid": "9495898e-2bff-5729-a3a8-1e4427ac2112",
00:14:08.364        "is_configured": true,
00:14:08.364        "data_offset": 2048,
00:14:08.364        "data_size": 63488
00:14:08.364      },
00:14:08.364      {
00:14:08.364        "name": "BaseBdev4",
00:14:08.364        "uuid": "1df3c0d1-9a12-53c8-a6af-579f501564ef",
00:14:08.364        "is_configured": true,
00:14:08.364        "data_offset": 2048,
00:14:08.364        "data_size": 63488
00:14:08.364      }
00:14:08.364    ]
00:14:08.364  }'
00:14:08.364    11:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:08.364   11:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:08.364    11:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:08.364   11:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:08.365   11:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
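From here the test polls for completion (bdev_raid.sh@706-711): for up to 387 seconds it re-reads the raid bdev once a second while a rebuild process is still reported, breaking out as soon as the process type is no longer "rebuild". A standalone loop in the same spirit:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    timeout=387
    while (( SECONDS < timeout )); do
        proc_type=$(rpc bdev_raid_get_bdevs all \
                    | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
        [[ $proc_type == rebuild ]] || break    # reports "none" once the rebuild has finished
        sleep 1
    done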
00:14:09.741   11:35:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:14:09.741   11:35:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:09.741   11:35:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:09.741   11:35:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:09.741   11:35:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:09.741   11:35:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:09.741    11:35:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:09.741    11:35:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:09.741    11:35:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:09.741    11:35:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:09.741    11:35:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:09.741   11:35:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:09.741    "name": "raid_bdev1",
00:14:09.741    "uuid": "79eba291-e3ba-443e-b2ad-76b48a08a4c9",
00:14:09.741    "strip_size_kb": 0,
00:14:09.741    "state": "online",
00:14:09.741    "raid_level": "raid1",
00:14:09.741    "superblock": true,
00:14:09.741    "num_base_bdevs": 4,
00:14:09.741    "num_base_bdevs_discovered": 3,
00:14:09.741    "num_base_bdevs_operational": 3,
00:14:09.741    "process": {
00:14:09.741      "type": "rebuild",
00:14:09.741      "target": "spare",
00:14:09.741      "progress": {
00:14:09.741        "blocks": 49152,
00:14:09.741        "percent": 77
00:14:09.741      }
00:14:09.741    },
00:14:09.741    "base_bdevs_list": [
00:14:09.741      {
00:14:09.741        "name": "spare",
00:14:09.741        "uuid": "06edf1b6-88a0-5daf-8a52-31ac3cc3eae8",
00:14:09.741        "is_configured": true,
00:14:09.741        "data_offset": 2048,
00:14:09.741        "data_size": 63488
00:14:09.741      },
00:14:09.741      {
00:14:09.741        "name": null,
00:14:09.741        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:09.741        "is_configured": false,
00:14:09.741        "data_offset": 0,
00:14:09.741        "data_size": 63488
00:14:09.741      },
00:14:09.741      {
00:14:09.741        "name": "BaseBdev3",
00:14:09.741        "uuid": "9495898e-2bff-5729-a3a8-1e4427ac2112",
00:14:09.741        "is_configured": true,
00:14:09.741        "data_offset": 2048,
00:14:09.741        "data_size": 63488
00:14:09.741      },
00:14:09.741      {
00:14:09.741        "name": "BaseBdev4",
00:14:09.741        "uuid": "1df3c0d1-9a12-53c8-a6af-579f501564ef",
00:14:09.741        "is_configured": true,
00:14:09.741        "data_offset": 2048,
00:14:09.741        "data_size": 63488
00:14:09.741      }
00:14:09.741    ]
00:14:09.741  }'
00:14:09.741    11:35:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:09.741   11:35:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:09.741    11:35:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:09.741   11:35:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:09.741   11:35:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:14:10.001  [2024-12-16 11:35:36.027024] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:14:10.001  [2024-12-16 11:35:36.027113] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:14:10.001  [2024-12-16 11:35:36.027274] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:10.570   11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:14:10.570   11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:10.570   11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:10.570   11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:10.570   11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:10.570   11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:10.570    11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:10.570    11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:10.570    11:35:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:10.570    11:35:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:10.570    11:35:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:10.570   11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:10.570    "name": "raid_bdev1",
00:14:10.570    "uuid": "79eba291-e3ba-443e-b2ad-76b48a08a4c9",
00:14:10.570    "strip_size_kb": 0,
00:14:10.570    "state": "online",
00:14:10.570    "raid_level": "raid1",
00:14:10.570    "superblock": true,
00:14:10.570    "num_base_bdevs": 4,
00:14:10.570    "num_base_bdevs_discovered": 3,
00:14:10.570    "num_base_bdevs_operational": 3,
00:14:10.570    "base_bdevs_list": [
00:14:10.570      {
00:14:10.570        "name": "spare",
00:14:10.570        "uuid": "06edf1b6-88a0-5daf-8a52-31ac3cc3eae8",
00:14:10.570        "is_configured": true,
00:14:10.570        "data_offset": 2048,
00:14:10.570        "data_size": 63488
00:14:10.570      },
00:14:10.570      {
00:14:10.570        "name": null,
00:14:10.570        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:10.570        "is_configured": false,
00:14:10.570        "data_offset": 0,
00:14:10.570        "data_size": 63488
00:14:10.570      },
00:14:10.570      {
00:14:10.570        "name": "BaseBdev3",
00:14:10.570        "uuid": "9495898e-2bff-5729-a3a8-1e4427ac2112",
00:14:10.570        "is_configured": true,
00:14:10.570        "data_offset": 2048,
00:14:10.570        "data_size": 63488
00:14:10.570      },
00:14:10.570      {
00:14:10.570        "name": "BaseBdev4",
00:14:10.570        "uuid": "1df3c0d1-9a12-53c8-a6af-579f501564ef",
00:14:10.570        "is_configured": true,
00:14:10.570        "data_offset": 2048,
00:14:10.570        "data_size": 63488
00:14:10.570      }
00:14:10.570    ]
00:14:10.570  }'
00:14:10.570    11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:10.830   11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:14:10.830    11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:10.830   11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:14:10.830   11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break
00:14:10.830   11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:10.830   11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:10.830   11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:10.830   11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:10.830   11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:10.831    11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:10.831    11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:10.831    11:35:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:10.831    11:35:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:10.831    11:35:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:10.831   11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:10.831    "name": "raid_bdev1",
00:14:10.831    "uuid": "79eba291-e3ba-443e-b2ad-76b48a08a4c9",
00:14:10.831    "strip_size_kb": 0,
00:14:10.831    "state": "online",
00:14:10.831    "raid_level": "raid1",
00:14:10.831    "superblock": true,
00:14:10.831    "num_base_bdevs": 4,
00:14:10.831    "num_base_bdevs_discovered": 3,
00:14:10.831    "num_base_bdevs_operational": 3,
00:14:10.831    "base_bdevs_list": [
00:14:10.831      {
00:14:10.831        "name": "spare",
00:14:10.831        "uuid": "06edf1b6-88a0-5daf-8a52-31ac3cc3eae8",
00:14:10.831        "is_configured": true,
00:14:10.831        "data_offset": 2048,
00:14:10.831        "data_size": 63488
00:14:10.831      },
00:14:10.831      {
00:14:10.831        "name": null,
00:14:10.831        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:10.831        "is_configured": false,
00:14:10.831        "data_offset": 0,
00:14:10.831        "data_size": 63488
00:14:10.831      },
00:14:10.831      {
00:14:10.831        "name": "BaseBdev3",
00:14:10.831        "uuid": "9495898e-2bff-5729-a3a8-1e4427ac2112",
00:14:10.831        "is_configured": true,
00:14:10.831        "data_offset": 2048,
00:14:10.831        "data_size": 63488
00:14:10.831      },
00:14:10.831      {
00:14:10.831        "name": "BaseBdev4",
00:14:10.831        "uuid": "1df3c0d1-9a12-53c8-a6af-579f501564ef",
00:14:10.831        "is_configured": true,
00:14:10.831        "data_offset": 2048,
00:14:10.831        "data_size": 63488
00:14:10.831      }
00:14:10.831    ]
00:14:10.831  }'
00:14:10.831    11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:10.831   11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:14:10.831    11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:10.831   11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:14:10.831   11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:14:10.831   11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:10.831   11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:10.831   11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:10.831   11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:10.831   11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:10.831   11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:10.831   11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:10.831   11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:10.831   11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:10.831    11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:10.831    11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:10.831    11:35:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:10.831    11:35:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:10.831    11:35:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:10.831   11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:10.831    "name": "raid_bdev1",
00:14:10.831    "uuid": "79eba291-e3ba-443e-b2ad-76b48a08a4c9",
00:14:10.831    "strip_size_kb": 0,
00:14:10.831    "state": "online",
00:14:10.831    "raid_level": "raid1",
00:14:10.831    "superblock": true,
00:14:10.831    "num_base_bdevs": 4,
00:14:10.831    "num_base_bdevs_discovered": 3,
00:14:10.831    "num_base_bdevs_operational": 3,
00:14:10.831    "base_bdevs_list": [
00:14:10.831      {
00:14:10.831        "name": "spare",
00:14:10.831        "uuid": "06edf1b6-88a0-5daf-8a52-31ac3cc3eae8",
00:14:10.831        "is_configured": true,
00:14:10.831        "data_offset": 2048,
00:14:10.831        "data_size": 63488
00:14:10.831      },
00:14:10.831      {
00:14:10.831        "name": null,
00:14:10.831        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:10.831        "is_configured": false,
00:14:10.831        "data_offset": 0,
00:14:10.831        "data_size": 63488
00:14:10.831      },
00:14:10.831      {
00:14:10.831        "name": "BaseBdev3",
00:14:10.831        "uuid": "9495898e-2bff-5729-a3a8-1e4427ac2112",
00:14:10.831        "is_configured": true,
00:14:10.831        "data_offset": 2048,
00:14:10.831        "data_size": 63488
00:14:10.831      },
00:14:10.831      {
00:14:10.831        "name": "BaseBdev4",
00:14:10.831        "uuid": "1df3c0d1-9a12-53c8-a6af-579f501564ef",
00:14:10.831        "is_configured": true,
00:14:10.831        "data_offset": 2048,
00:14:10.831        "data_size": 63488
00:14:10.831      }
00:14:10.831    ]
00:14:10.831  }'
00:14:10.831   11:35:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:10.831   11:35:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:11.399   11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:14:11.399   11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:11.399   11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:11.399  [2024-12-16 11:35:37.277197] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:11.399  [2024-12-16 11:35:37.277298] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:11.399  [2024-12-16 11:35:37.277417] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:11.399  [2024-12-16 11:35:37.277506] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:11.399  [2024-12-16 11:35:37.277521] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:14:11.399   11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:11.400    11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:11.400    11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length
00:14:11.400    11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:11.400    11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:11.400    11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:11.400   11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
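With the state checks done, the raid bdev is deleted (bdev_raid.sh@719) and the test confirms that bdev_raid_get_bdevs now returns an empty list, which is what the jq length comparison above verifies. Equivalently:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    rpc bdev_raid_delete raid_bdev1
    [[ $(rpc bdev_raid_get_bdevs all | jq length) == 0 ]]   # no raid bdevs left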
00:14:11.400   11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:14:11.400   11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:14:11.400   11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:14:11.400   11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:14:11.400   11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:14:11.400   11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:14:11.400   11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:14:11.400   11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:14:11.400   11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:14:11.400   11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:14:11.400   11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:14:11.400   11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:14:11.659  /dev/nbd0
00:14:11.659    11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:14:11.659   11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:14:11.659   11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:14:11.659   11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i
00:14:11.659   11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:14:11.659   11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:14:11.659   11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:14:11.659   11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break
00:14:11.659   11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:14:11.659   11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:14:11.659   11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:14:11.659  1+0 records in
00:14:11.659  1+0 records out
00:14:11.659  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419944 s, 9.8 MB/s
00:14:11.659    11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:11.659   11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096
00:14:11.659   11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:11.659   11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:14:11.659   11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0
00:14:11.659   11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:14:11.659   11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:14:11.659   11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1
00:14:11.919  /dev/nbd1
00:14:11.919    11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:14:11.919   11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:14:11.919   11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:14:11.919   11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i
00:14:11.919   11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:14:11.919   11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:14:11.919   11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:14:11.919   11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break
00:14:11.919   11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:14:11.919   11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:14:11.919   11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:14:11.919  1+0 records in
00:14:11.919  1+0 records out
00:14:11.919  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332121 s, 12.3 MB/s
00:14:11.919    11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:11.919   11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096
00:14:11.919   11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:11.919   11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:14:11.919   11:35:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0
00:14:11.919   11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:14:11.919   11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:14:11.919   11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
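For the data check, the surviving original member (BaseBdev1) and the spare are both exported as NBD block devices and compared byte for byte; cmp -i 1048576 skips the first 1048576 bytes of each device, which matches the data_offset of 2048 blocks at a 512-byte block length reported earlier, so only the data region has to be identical. A condensed sketch (the readiness polling done by waitfornbd in the trace is omitted here):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    rpc nbd_start_disk BaseBdev1 /dev/nbd0
    rpc nbd_start_disk spare     /dev/nbd1
    cmp -i 1048576 /dev/nbd0 /dev/nbd1      # identical past the 1 MiB superblock/metadata area
    rpc nbd_stop_disk /dev/nbd0
    rpc nbd_stop_disk /dev/nbd1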
00:14:11.919   11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:14:11.919   11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:14:11.919   11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:14:11.919   11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:14:11.919   11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:14:11.919   11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:14:11.919   11:35:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:14:12.179    11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:14:12.179   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:14:12.179   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:14:12.179   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:14:12.179   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:14:12.179   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:14:12.179   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:14:12.179   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:14:12.179   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:14:12.179   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:14:12.444    11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:14:12.444   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:14:12.444   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:14:12.444   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:14:12.444   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:14:12.444   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:14:12.444   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:14:12.444   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:14:12.444   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:14:12.444   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:14:12.444   11:35:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:12.444   11:35:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:12.444   11:35:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:12.444   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:14:12.444   11:35:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:12.444   11:35:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:12.444  [2024-12-16 11:35:38.423928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:14:12.444  [2024-12-16 11:35:38.423996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:12.444  [2024-12-16 11:35:38.424020] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
00:14:12.444  [2024-12-16 11:35:38.424037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:12.444  [2024-12-16 11:35:38.426461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:12.444  [2024-12-16 11:35:38.426509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:14:12.444  [2024-12-16 11:35:38.426616] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:14:12.444  [2024-12-16 11:35:38.426669] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:12.444  [2024-12-16 11:35:38.426806] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:14:12.444  [2024-12-16 11:35:38.426917] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:14:12.444  spare
00:14:12.444   11:35:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:12.444   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:14:12.444   11:35:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:12.444   11:35:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:12.710  [2024-12-16 11:35:38.526818] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600
00:14:12.710  [2024-12-16 11:35:38.526856] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:14:12.710  [2024-12-16 11:35:38.527181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0
00:14:12.710  [2024-12-16 11:35:38.527355] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600
00:14:12.710  [2024-12-16 11:35:38.527366] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600
00:14:12.710  [2024-12-16 11:35:38.527518] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
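The spare passthru is then torn down and recreated on top of spare_delay (bdev_raid.sh@745-746); because the device still carries a raid superblock, the examine path rediscovers it ("raid superblock found on bdev spare") and raid_bdev1 re-assembles from the three available members without any explicit create call, with bdev_wait_for_examine blocking until that auto-configuration settles. The sequence, as run here:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    rpc bdev_passthru_delete spare
    rpc bdev_passthru_create -b spare_delay -p spare   # re-expose the same data under the old name
    rpc bdev_wait_for_examine                          # raid_bdev1 comes back via the superblock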
00:14:12.710   11:35:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:12.710   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:14:12.710   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:12.710   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:12.710   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:12.710   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:12.710   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:12.710   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:12.710   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:12.710   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:12.710   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:12.710    11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:12.710    11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:12.710    11:35:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:12.710    11:35:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:12.710    11:35:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:12.710   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:12.710    "name": "raid_bdev1",
00:14:12.710    "uuid": "79eba291-e3ba-443e-b2ad-76b48a08a4c9",
00:14:12.710    "strip_size_kb": 0,
00:14:12.710    "state": "online",
00:14:12.710    "raid_level": "raid1",
00:14:12.710    "superblock": true,
00:14:12.710    "num_base_bdevs": 4,
00:14:12.710    "num_base_bdevs_discovered": 3,
00:14:12.710    "num_base_bdevs_operational": 3,
00:14:12.710    "base_bdevs_list": [
00:14:12.710      {
00:14:12.710        "name": "spare",
00:14:12.710        "uuid": "06edf1b6-88a0-5daf-8a52-31ac3cc3eae8",
00:14:12.710        "is_configured": true,
00:14:12.710        "data_offset": 2048,
00:14:12.710        "data_size": 63488
00:14:12.710      },
00:14:12.710      {
00:14:12.710        "name": null,
00:14:12.710        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:12.710        "is_configured": false,
00:14:12.710        "data_offset": 2048,
00:14:12.710        "data_size": 63488
00:14:12.710      },
00:14:12.710      {
00:14:12.710        "name": "BaseBdev3",
00:14:12.710        "uuid": "9495898e-2bff-5729-a3a8-1e4427ac2112",
00:14:12.710        "is_configured": true,
00:14:12.710        "data_offset": 2048,
00:14:12.710        "data_size": 63488
00:14:12.710      },
00:14:12.710      {
00:14:12.710        "name": "BaseBdev4",
00:14:12.710        "uuid": "1df3c0d1-9a12-53c8-a6af-579f501564ef",
00:14:12.710        "is_configured": true,
00:14:12.710        "data_offset": 2048,
00:14:12.710        "data_size": 63488
00:14:12.710      }
00:14:12.710    ]
00:14:12.710  }'
00:14:12.710   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:12.710   11:35:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:12.970   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:12.971   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:12.971   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:12.971   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:12.971   11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:12.971    11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:12.971    11:35:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:12.971    11:35:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:12.971    11:35:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:12.971    11:35:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:12.971   11:35:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:12.971    "name": "raid_bdev1",
00:14:12.971    "uuid": "79eba291-e3ba-443e-b2ad-76b48a08a4c9",
00:14:12.971    "strip_size_kb": 0,
00:14:12.971    "state": "online",
00:14:12.971    "raid_level": "raid1",
00:14:12.971    "superblock": true,
00:14:12.971    "num_base_bdevs": 4,
00:14:12.971    "num_base_bdevs_discovered": 3,
00:14:12.971    "num_base_bdevs_operational": 3,
00:14:12.971    "base_bdevs_list": [
00:14:12.971      {
00:14:12.971        "name": "spare",
00:14:12.971        "uuid": "06edf1b6-88a0-5daf-8a52-31ac3cc3eae8",
00:14:12.971        "is_configured": true,
00:14:12.971        "data_offset": 2048,
00:14:12.971        "data_size": 63488
00:14:12.971      },
00:14:12.971      {
00:14:12.971        "name": null,
00:14:12.971        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:12.971        "is_configured": false,
00:14:12.971        "data_offset": 2048,
00:14:12.971        "data_size": 63488
00:14:12.971      },
00:14:12.971      {
00:14:12.971        "name": "BaseBdev3",
00:14:12.971        "uuid": "9495898e-2bff-5729-a3a8-1e4427ac2112",
00:14:12.971        "is_configured": true,
00:14:12.971        "data_offset": 2048,
00:14:12.971        "data_size": 63488
00:14:12.971      },
00:14:12.971      {
00:14:12.971        "name": "BaseBdev4",
00:14:12.971        "uuid": "1df3c0d1-9a12-53c8-a6af-579f501564ef",
00:14:12.971        "is_configured": true,
00:14:12.971        "data_offset": 2048,
00:14:12.971        "data_size": 63488
00:14:12.971      }
00:14:12.971    ]
00:14:12.971  }'
00:14:12.971    11:35:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:13.230   11:35:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:14:13.230    11:35:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:13.230   11:35:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:14:13.230    11:35:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:13.230    11:35:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:13.230    11:35:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:13.230    11:35:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:14:13.231    11:35:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:13.231   11:35:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:14:13.231   11:35:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:14:13.231   11:35:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:13.231   11:35:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:13.231  [2024-12-16 11:35:39.162983] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:13.231   11:35:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:13.231   11:35:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:14:13.231   11:35:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:13.231   11:35:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:13.231   11:35:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:13.231   11:35:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:13.231   11:35:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:13.231   11:35:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:13.231   11:35:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:13.231   11:35:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:13.231   11:35:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:13.231    11:35:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:13.231    11:35:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:13.231    11:35:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:13.231    11:35:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:13.231    11:35:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:13.231   11:35:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:13.231    "name": "raid_bdev1",
00:14:13.231    "uuid": "79eba291-e3ba-443e-b2ad-76b48a08a4c9",
00:14:13.231    "strip_size_kb": 0,
00:14:13.231    "state": "online",
00:14:13.231    "raid_level": "raid1",
00:14:13.231    "superblock": true,
00:14:13.231    "num_base_bdevs": 4,
00:14:13.231    "num_base_bdevs_discovered": 2,
00:14:13.231    "num_base_bdevs_operational": 2,
00:14:13.231    "base_bdevs_list": [
00:14:13.231      {
00:14:13.231        "name": null,
00:14:13.231        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:13.231        "is_configured": false,
00:14:13.231        "data_offset": 0,
00:14:13.231        "data_size": 63488
00:14:13.231      },
00:14:13.231      {
00:14:13.231        "name": null,
00:14:13.231        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:13.231        "is_configured": false,
00:14:13.231        "data_offset": 2048,
00:14:13.231        "data_size": 63488
00:14:13.231      },
00:14:13.231      {
00:14:13.231        "name": "BaseBdev3",
00:14:13.231        "uuid": "9495898e-2bff-5729-a3a8-1e4427ac2112",
00:14:13.231        "is_configured": true,
00:14:13.231        "data_offset": 2048,
00:14:13.231        "data_size": 63488
00:14:13.231      },
00:14:13.231      {
00:14:13.231        "name": "BaseBdev4",
00:14:13.231        "uuid": "1df3c0d1-9a12-53c8-a6af-579f501564ef",
00:14:13.231        "is_configured": true,
00:14:13.231        "data_offset": 2048,
00:14:13.231        "data_size": 63488
00:14:13.231      }
00:14:13.231    ]
00:14:13.231  }'
00:14:13.231   11:35:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:13.231   11:35:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:13.801   11:35:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:14:13.801   11:35:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:13.801   11:35:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:13.801  [2024-12-16 11:35:39.606279] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:13.801  [2024-12-16 11:35:39.606550] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6)
00:14:13.801  [2024-12-16 11:35:39.606642] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:14:13.801  [2024-12-16 11:35:39.606708] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:13.801  [2024-12-16 11:35:39.610012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0
00:14:13.801  [2024-12-16 11:35:39.612158] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:13.801   11:35:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:13.801   11:35:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1
00:14:14.740   11:35:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:14.740   11:35:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:14.740   11:35:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:14.740   11:35:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:14.740   11:35:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:14.740    11:35:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:14.740    11:35:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:14.740    11:35:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:14.740    11:35:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:14.740    11:35:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:14.740   11:35:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:14.740    "name": "raid_bdev1",
00:14:14.740    "uuid": "79eba291-e3ba-443e-b2ad-76b48a08a4c9",
00:14:14.740    "strip_size_kb": 0,
00:14:14.740    "state": "online",
00:14:14.740    "raid_level": "raid1",
00:14:14.740    "superblock": true,
00:14:14.740    "num_base_bdevs": 4,
00:14:14.740    "num_base_bdevs_discovered": 3,
00:14:14.740    "num_base_bdevs_operational": 3,
00:14:14.740    "process": {
00:14:14.740      "type": "rebuild",
00:14:14.740      "target": "spare",
00:14:14.740      "progress": {
00:14:14.740        "blocks": 20480,
00:14:14.740        "percent": 32
00:14:14.740      }
00:14:14.740    },
00:14:14.740    "base_bdevs_list": [
00:14:14.740      {
00:14:14.740        "name": "spare",
00:14:14.740        "uuid": "06edf1b6-88a0-5daf-8a52-31ac3cc3eae8",
00:14:14.740        "is_configured": true,
00:14:14.740        "data_offset": 2048,
00:14:14.740        "data_size": 63488
00:14:14.740      },
00:14:14.740      {
00:14:14.740        "name": null,
00:14:14.740        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:14.740        "is_configured": false,
00:14:14.740        "data_offset": 2048,
00:14:14.740        "data_size": 63488
00:14:14.740      },
00:14:14.740      {
00:14:14.740        "name": "BaseBdev3",
00:14:14.740        "uuid": "9495898e-2bff-5729-a3a8-1e4427ac2112",
00:14:14.740        "is_configured": true,
00:14:14.740        "data_offset": 2048,
00:14:14.740        "data_size": 63488
00:14:14.740      },
00:14:14.740      {
00:14:14.740        "name": "BaseBdev4",
00:14:14.740        "uuid": "1df3c0d1-9a12-53c8-a6af-579f501564ef",
00:14:14.740        "is_configured": true,
00:14:14.740        "data_offset": 2048,
00:14:14.740        "data_size": 63488
00:14:14.740      }
00:14:14.740    ]
00:14:14.740  }'
00:14:14.740    11:35:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:14.740   11:35:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:14.740    11:35:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:14.740   11:35:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:14.740   11:35:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:14:14.740   11:35:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:14.740   11:35:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:14.740  [2024-12-16 11:35:40.779467] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:15.001  [2024-12-16 11:35:40.816828] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:14:15.001  [2024-12-16 11:35:40.816894] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:15.001  [2024-12-16 11:35:40.816909] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:15.001  [2024-12-16 11:35:40.816919] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:14:15.001   11:35:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:15.001   11:35:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:14:15.001   11:35:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:15.001   11:35:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:15.001   11:35:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:15.001   11:35:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:15.001   11:35:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:15.001   11:35:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:15.001   11:35:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:15.001   11:35:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:15.001   11:35:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:15.001    11:35:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:15.001    11:35:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:15.001    11:35:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:15.001    11:35:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:15.001    11:35:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:15.001   11:35:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:15.001    "name": "raid_bdev1",
00:14:15.001    "uuid": "79eba291-e3ba-443e-b2ad-76b48a08a4c9",
00:14:15.001    "strip_size_kb": 0,
00:14:15.001    "state": "online",
00:14:15.001    "raid_level": "raid1",
00:14:15.001    "superblock": true,
00:14:15.001    "num_base_bdevs": 4,
00:14:15.001    "num_base_bdevs_discovered": 2,
00:14:15.001    "num_base_bdevs_operational": 2,
00:14:15.001    "base_bdevs_list": [
00:14:15.001      {
00:14:15.001        "name": null,
00:14:15.001        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:15.001        "is_configured": false,
00:14:15.001        "data_offset": 0,
00:14:15.001        "data_size": 63488
00:14:15.001      },
00:14:15.001      {
00:14:15.001        "name": null,
00:14:15.001        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:15.001        "is_configured": false,
00:14:15.001        "data_offset": 2048,
00:14:15.001        "data_size": 63488
00:14:15.001      },
00:14:15.001      {
00:14:15.001        "name": "BaseBdev3",
00:14:15.001        "uuid": "9495898e-2bff-5729-a3a8-1e4427ac2112",
00:14:15.001        "is_configured": true,
00:14:15.001        "data_offset": 2048,
00:14:15.001        "data_size": 63488
00:14:15.001      },
00:14:15.001      {
00:14:15.001        "name": "BaseBdev4",
00:14:15.001        "uuid": "1df3c0d1-9a12-53c8-a6af-579f501564ef",
00:14:15.001        "is_configured": true,
00:14:15.001        "data_offset": 2048,
00:14:15.001        "data_size": 63488
00:14:15.001      }
00:14:15.001    ]
00:14:15.001  }'
00:14:15.001   11:35:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:15.001   11:35:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:15.261   11:35:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:14:15.261   11:35:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:15.261   11:35:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:15.261  [2024-12-16 11:35:41.256271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:14:15.261  [2024-12-16 11:35:41.256411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:15.261  [2024-12-16 11:35:41.256469] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380
00:14:15.261  [2024-12-16 11:35:41.256506] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:15.261  [2024-12-16 11:35:41.257033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:15.261  [2024-12-16 11:35:41.257073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:14:15.261  [2024-12-16 11:35:41.257170] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:14:15.261  [2024-12-16 11:35:41.257192] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6)
00:14:15.261  [2024-12-16 11:35:41.257208] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:14:15.261  [2024-12-16 11:35:41.257238] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:15.261  spare
00:14:15.261  [2024-12-16 11:35:41.260570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80
00:14:15.261   11:35:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:15.261   11:35:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1
00:14:15.261  [2024-12-16 11:35:41.262628] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:16.642   11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:16.642   11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:16.642   11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:16.642   11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:16.642   11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:16.642    11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:16.642    11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:16.642    11:35:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.642    11:35:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:16.642    11:35:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:16.642   11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:16.642    "name": "raid_bdev1",
00:14:16.642    "uuid": "79eba291-e3ba-443e-b2ad-76b48a08a4c9",
00:14:16.642    "strip_size_kb": 0,
00:14:16.642    "state": "online",
00:14:16.642    "raid_level": "raid1",
00:14:16.642    "superblock": true,
00:14:16.642    "num_base_bdevs": 4,
00:14:16.642    "num_base_bdevs_discovered": 3,
00:14:16.642    "num_base_bdevs_operational": 3,
00:14:16.642    "process": {
00:14:16.642      "type": "rebuild",
00:14:16.642      "target": "spare",
00:14:16.642      "progress": {
00:14:16.642        "blocks": 20480,
00:14:16.642        "percent": 32
00:14:16.642      }
00:14:16.642    },
00:14:16.642    "base_bdevs_list": [
00:14:16.642      {
00:14:16.642        "name": "spare",
00:14:16.642        "uuid": "06edf1b6-88a0-5daf-8a52-31ac3cc3eae8",
00:14:16.642        "is_configured": true,
00:14:16.642        "data_offset": 2048,
00:14:16.642        "data_size": 63488
00:14:16.642      },
00:14:16.642      {
00:14:16.642        "name": null,
00:14:16.642        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:16.642        "is_configured": false,
00:14:16.642        "data_offset": 2048,
00:14:16.642        "data_size": 63488
00:14:16.642      },
00:14:16.642      {
00:14:16.642        "name": "BaseBdev3",
00:14:16.642        "uuid": "9495898e-2bff-5729-a3a8-1e4427ac2112",
00:14:16.642        "is_configured": true,
00:14:16.642        "data_offset": 2048,
00:14:16.642        "data_size": 63488
00:14:16.642      },
00:14:16.642      {
00:14:16.642        "name": "BaseBdev4",
00:14:16.642        "uuid": "1df3c0d1-9a12-53c8-a6af-579f501564ef",
00:14:16.642        "is_configured": true,
00:14:16.642        "data_offset": 2048,
00:14:16.642        "data_size": 63488
00:14:16.642      }
00:14:16.642    ]
00:14:16.642  }'
00:14:16.642    11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:16.642   11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:16.642    11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:16.642   11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:16.642   11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:14:16.642   11:35:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.642   11:35:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:16.642  [2024-12-16 11:35:42.403522] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:16.642  [2024-12-16 11:35:42.467385] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:14:16.642  [2024-12-16 11:35:42.467452] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:16.642  [2024-12-16 11:35:42.467473] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:16.642  [2024-12-16 11:35:42.467482] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:14:16.642   11:35:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:16.642   11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:14:16.642   11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:16.642   11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:16.642   11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:16.642   11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:16.642   11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:16.642   11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:16.642   11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:16.642   11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:16.642   11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:16.642    11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:16.642    11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:16.642    11:35:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.642    11:35:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:16.642    11:35:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:16.642   11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:16.642    "name": "raid_bdev1",
00:14:16.642    "uuid": "79eba291-e3ba-443e-b2ad-76b48a08a4c9",
00:14:16.642    "strip_size_kb": 0,
00:14:16.642    "state": "online",
00:14:16.642    "raid_level": "raid1",
00:14:16.642    "superblock": true,
00:14:16.642    "num_base_bdevs": 4,
00:14:16.642    "num_base_bdevs_discovered": 2,
00:14:16.642    "num_base_bdevs_operational": 2,
00:14:16.642    "base_bdevs_list": [
00:14:16.642      {
00:14:16.642        "name": null,
00:14:16.642        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:16.642        "is_configured": false,
00:14:16.642        "data_offset": 0,
00:14:16.642        "data_size": 63488
00:14:16.642      },
00:14:16.642      {
00:14:16.642        "name": null,
00:14:16.642        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:16.642        "is_configured": false,
00:14:16.642        "data_offset": 2048,
00:14:16.642        "data_size": 63488
00:14:16.642      },
00:14:16.642      {
00:14:16.642        "name": "BaseBdev3",
00:14:16.642        "uuid": "9495898e-2bff-5729-a3a8-1e4427ac2112",
00:14:16.642        "is_configured": true,
00:14:16.642        "data_offset": 2048,
00:14:16.642        "data_size": 63488
00:14:16.642      },
00:14:16.642      {
00:14:16.642        "name": "BaseBdev4",
00:14:16.642        "uuid": "1df3c0d1-9a12-53c8-a6af-579f501564ef",
00:14:16.642        "is_configured": true,
00:14:16.642        "data_offset": 2048,
00:14:16.642        "data_size": 63488
00:14:16.642      }
00:14:16.642    ]
00:14:16.642  }'
00:14:16.642   11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:16.642   11:35:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:16.902   11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:16.902   11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:16.902   11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:16.902   11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:16.902   11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:16.902    11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:16.902    11:35:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.902    11:35:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:16.902    11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:16.902    11:35:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:17.162   11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:17.162    "name": "raid_bdev1",
00:14:17.162    "uuid": "79eba291-e3ba-443e-b2ad-76b48a08a4c9",
00:14:17.162    "strip_size_kb": 0,
00:14:17.162    "state": "online",
00:14:17.162    "raid_level": "raid1",
00:14:17.162    "superblock": true,
00:14:17.162    "num_base_bdevs": 4,
00:14:17.162    "num_base_bdevs_discovered": 2,
00:14:17.162    "num_base_bdevs_operational": 2,
00:14:17.162    "base_bdevs_list": [
00:14:17.162      {
00:14:17.162        "name": null,
00:14:17.162        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:17.162        "is_configured": false,
00:14:17.162        "data_offset": 0,
00:14:17.162        "data_size": 63488
00:14:17.162      },
00:14:17.162      {
00:14:17.162        "name": null,
00:14:17.162        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:17.162        "is_configured": false,
00:14:17.162        "data_offset": 2048,
00:14:17.162        "data_size": 63488
00:14:17.162      },
00:14:17.162      {
00:14:17.162        "name": "BaseBdev3",
00:14:17.162        "uuid": "9495898e-2bff-5729-a3a8-1e4427ac2112",
00:14:17.162        "is_configured": true,
00:14:17.162        "data_offset": 2048,
00:14:17.162        "data_size": 63488
00:14:17.162      },
00:14:17.162      {
00:14:17.162        "name": "BaseBdev4",
00:14:17.162        "uuid": "1df3c0d1-9a12-53c8-a6af-579f501564ef",
00:14:17.162        "is_configured": true,
00:14:17.162        "data_offset": 2048,
00:14:17.162        "data_size": 63488
00:14:17.162      }
00:14:17.162    ]
00:14:17.162  }'
00:14:17.162    11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:17.162   11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:14:17.162    11:35:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:17.162   11:35:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:14:17.162   11:35:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:14:17.162   11:35:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:17.162   11:35:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:17.162   11:35:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:17.162   11:35:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:14:17.162   11:35:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:17.162   11:35:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:17.162  [2024-12-16 11:35:43.066637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:14:17.162  [2024-12-16 11:35:43.066694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:17.162  [2024-12-16 11:35:43.066716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980
00:14:17.162  [2024-12-16 11:35:43.066741] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:17.162  [2024-12-16 11:35:43.067213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:17.162  [2024-12-16 11:35:43.067255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:14:17.162  [2024-12-16 11:35:43.067350] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:14:17.162  [2024-12-16 11:35:43.067382] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6)
00:14:17.162  [2024-12-16 11:35:43.067394] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:14:17.162  [2024-12-16 11:35:43.067406] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:14:17.162  BaseBdev1
00:14:17.162   11:35:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:17.162   11:35:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1
00:14:18.100   11:35:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:14:18.100   11:35:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:18.100   11:35:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:18.100   11:35:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:18.100   11:35:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:18.100   11:35:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:18.100   11:35:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:18.100   11:35:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:18.100   11:35:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:18.100   11:35:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:18.100    11:35:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:18.100    11:35:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:18.100    11:35:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:18.100    11:35:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:18.100    11:35:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:18.100   11:35:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:18.100    "name": "raid_bdev1",
00:14:18.100    "uuid": "79eba291-e3ba-443e-b2ad-76b48a08a4c9",
00:14:18.100    "strip_size_kb": 0,
00:14:18.100    "state": "online",
00:14:18.100    "raid_level": "raid1",
00:14:18.100    "superblock": true,
00:14:18.100    "num_base_bdevs": 4,
00:14:18.100    "num_base_bdevs_discovered": 2,
00:14:18.100    "num_base_bdevs_operational": 2,
00:14:18.100    "base_bdevs_list": [
00:14:18.100      {
00:14:18.100        "name": null,
00:14:18.100        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:18.100        "is_configured": false,
00:14:18.100        "data_offset": 0,
00:14:18.100        "data_size": 63488
00:14:18.100      },
00:14:18.100      {
00:14:18.100        "name": null,
00:14:18.100        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:18.100        "is_configured": false,
00:14:18.100        "data_offset": 2048,
00:14:18.100        "data_size": 63488
00:14:18.100      },
00:14:18.100      {
00:14:18.100        "name": "BaseBdev3",
00:14:18.100        "uuid": "9495898e-2bff-5729-a3a8-1e4427ac2112",
00:14:18.100        "is_configured": true,
00:14:18.100        "data_offset": 2048,
00:14:18.100        "data_size": 63488
00:14:18.100      },
00:14:18.100      {
00:14:18.100        "name": "BaseBdev4",
00:14:18.100        "uuid": "1df3c0d1-9a12-53c8-a6af-579f501564ef",
00:14:18.100        "is_configured": true,
00:14:18.100        "data_offset": 2048,
00:14:18.100        "data_size": 63488
00:14:18.100      }
00:14:18.100    ]
00:14:18.100  }'
00:14:18.100   11:35:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:18.100   11:35:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:18.690   11:35:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:18.690   11:35:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:18.690   11:35:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:18.690   11:35:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:18.690   11:35:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:18.690    11:35:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:18.690    11:35:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:18.690    11:35:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:18.690    11:35:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:18.690    11:35:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:18.690   11:35:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:18.690    "name": "raid_bdev1",
00:14:18.690    "uuid": "79eba291-e3ba-443e-b2ad-76b48a08a4c9",
00:14:18.690    "strip_size_kb": 0,
00:14:18.690    "state": "online",
00:14:18.690    "raid_level": "raid1",
00:14:18.690    "superblock": true,
00:14:18.690    "num_base_bdevs": 4,
00:14:18.690    "num_base_bdevs_discovered": 2,
00:14:18.690    "num_base_bdevs_operational": 2,
00:14:18.690    "base_bdevs_list": [
00:14:18.690      {
00:14:18.690        "name": null,
00:14:18.690        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:18.690        "is_configured": false,
00:14:18.690        "data_offset": 0,
00:14:18.690        "data_size": 63488
00:14:18.690      },
00:14:18.690      {
00:14:18.690        "name": null,
00:14:18.690        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:18.690        "is_configured": false,
00:14:18.690        "data_offset": 2048,
00:14:18.690        "data_size": 63488
00:14:18.690      },
00:14:18.690      {
00:14:18.690        "name": "BaseBdev3",
00:14:18.690        "uuid": "9495898e-2bff-5729-a3a8-1e4427ac2112",
00:14:18.690        "is_configured": true,
00:14:18.690        "data_offset": 2048,
00:14:18.690        "data_size": 63488
00:14:18.690      },
00:14:18.690      {
00:14:18.690        "name": "BaseBdev4",
00:14:18.690        "uuid": "1df3c0d1-9a12-53c8-a6af-579f501564ef",
00:14:18.690        "is_configured": true,
00:14:18.690        "data_offset": 2048,
00:14:18.690        "data_size": 63488
00:14:18.690      }
00:14:18.690    ]
00:14:18.690  }'
00:14:18.690    11:35:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:18.691   11:35:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:14:18.691    11:35:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:18.691   11:35:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:14:18.691   11:35:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:14:18.691   11:35:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0
00:14:18.691   11:35:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:14:18.691   11:35:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:14:18.691   11:35:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:18.691    11:35:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:14:18.691   11:35:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:18.691   11:35:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:14:18.691   11:35:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:18.691   11:35:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:18.691  [2024-12-16 11:35:44.644029] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:18.691  [2024-12-16 11:35:44.644258] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6)
00:14:18.691  [2024-12-16 11:35:44.644330] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:14:18.691  request:
00:14:18.691  {
00:14:18.691  "base_bdev": "BaseBdev1",
00:14:18.691  "raid_bdev": "raid_bdev1",
00:14:18.691  "method": "bdev_raid_add_base_bdev",
00:14:18.691  "req_id": 1
00:14:18.691  }
00:14:18.691  Got JSON-RPC error response
00:14:18.691  response:
00:14:18.691  {
00:14:18.691  "code": -22,
00:14:18.691  "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:14:18.691  }
00:14:18.691   11:35:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:14:18.691   11:35:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1
00:14:18.691   11:35:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:14:18.691   11:35:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:14:18.691   11:35:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:14:18.691   11:35:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1
00:14:19.629   11:35:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:14:19.629   11:35:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:19.629   11:35:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:19.629   11:35:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:19.629   11:35:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:19.629   11:35:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:19.629   11:35:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:19.629   11:35:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:19.629   11:35:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:19.629   11:35:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:19.629    11:35:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:19.629    11:35:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:19.629    11:35:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:19.629    11:35:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:19.629    11:35:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:19.888   11:35:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:19.888    "name": "raid_bdev1",
00:14:19.888    "uuid": "79eba291-e3ba-443e-b2ad-76b48a08a4c9",
00:14:19.888    "strip_size_kb": 0,
00:14:19.888    "state": "online",
00:14:19.888    "raid_level": "raid1",
00:14:19.888    "superblock": true,
00:14:19.888    "num_base_bdevs": 4,
00:14:19.888    "num_base_bdevs_discovered": 2,
00:14:19.888    "num_base_bdevs_operational": 2,
00:14:19.888    "base_bdevs_list": [
00:14:19.888      {
00:14:19.888        "name": null,
00:14:19.888        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:19.888        "is_configured": false,
00:14:19.888        "data_offset": 0,
00:14:19.888        "data_size": 63488
00:14:19.888      },
00:14:19.888      {
00:14:19.888        "name": null,
00:14:19.888        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:19.888        "is_configured": false,
00:14:19.888        "data_offset": 2048,
00:14:19.888        "data_size": 63488
00:14:19.888      },
00:14:19.888      {
00:14:19.888        "name": "BaseBdev3",
00:14:19.888        "uuid": "9495898e-2bff-5729-a3a8-1e4427ac2112",
00:14:19.888        "is_configured": true,
00:14:19.888        "data_offset": 2048,
00:14:19.888        "data_size": 63488
00:14:19.888      },
00:14:19.888      {
00:14:19.888        "name": "BaseBdev4",
00:14:19.888        "uuid": "1df3c0d1-9a12-53c8-a6af-579f501564ef",
00:14:19.888        "is_configured": true,
00:14:19.888        "data_offset": 2048,
00:14:19.888        "data_size": 63488
00:14:19.888      }
00:14:19.888    ]
00:14:19.888  }'
00:14:19.888   11:35:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:19.888   11:35:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:20.148   11:35:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:20.148   11:35:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:20.148   11:35:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:20.148   11:35:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:20.148   11:35:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:20.148    11:35:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:20.148    11:35:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:20.148    11:35:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:20.148    11:35:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:20.148    11:35:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:20.148   11:35:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:20.148    "name": "raid_bdev1",
00:14:20.148    "uuid": "79eba291-e3ba-443e-b2ad-76b48a08a4c9",
00:14:20.148    "strip_size_kb": 0,
00:14:20.148    "state": "online",
00:14:20.148    "raid_level": "raid1",
00:14:20.148    "superblock": true,
00:14:20.148    "num_base_bdevs": 4,
00:14:20.148    "num_base_bdevs_discovered": 2,
00:14:20.148    "num_base_bdevs_operational": 2,
00:14:20.148    "base_bdevs_list": [
00:14:20.148      {
00:14:20.148        "name": null,
00:14:20.148        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:20.148        "is_configured": false,
00:14:20.148        "data_offset": 0,
00:14:20.148        "data_size": 63488
00:14:20.148      },
00:14:20.148      {
00:14:20.148        "name": null,
00:14:20.148        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:20.148        "is_configured": false,
00:14:20.148        "data_offset": 2048,
00:14:20.148        "data_size": 63488
00:14:20.148      },
00:14:20.148      {
00:14:20.148        "name": "BaseBdev3",
00:14:20.148        "uuid": "9495898e-2bff-5729-a3a8-1e4427ac2112",
00:14:20.149        "is_configured": true,
00:14:20.149        "data_offset": 2048,
00:14:20.149        "data_size": 63488
00:14:20.149      },
00:14:20.149      {
00:14:20.149        "name": "BaseBdev4",
00:14:20.149        "uuid": "1df3c0d1-9a12-53c8-a6af-579f501564ef",
00:14:20.149        "is_configured": true,
00:14:20.149        "data_offset": 2048,
00:14:20.149        "data_size": 63488
00:14:20.149      }
00:14:20.149    ]
00:14:20.149  }'
00:14:20.149    11:35:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:20.408   11:35:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:14:20.408    11:35:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:20.408   11:35:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:14:20.408   11:35:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 88978
00:14:20.408   11:35:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 88978 ']'
00:14:20.408   11:35:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 88978
00:14:20.408    11:35:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname
00:14:20.408   11:35:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:14:20.408    11:35:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88978
00:14:20.408   11:35:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:14:20.408  killing process with pid 88978
00:14:20.408  Received shutdown signal, test time was about 60.000000 seconds
00:14:20.408  
00:14:20.409                                                                                                  Latency(us)
00:14:20.409  
[2024-12-16T11:35:46.476Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:20.409  
[2024-12-16T11:35:46.476Z]  ===================================================================================================================
00:14:20.409  
[2024-12-16T11:35:46.476Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:14:20.409   11:35:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:14:20.409   11:35:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88978'
00:14:20.409   11:35:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 88978
00:14:20.409  [2024-12-16 11:35:46.321058] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:20.409  [2024-12-16 11:35:46.321195] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:20.409   11:35:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 88978
00:14:20.409  [2024-12-16 11:35:46.321280] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:20.409  [2024-12-16 11:35:46.321295] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline
00:14:20.409  [2024-12-16 11:35:46.373113] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:20.669  ************************************
00:14:20.669  END TEST raid_rebuild_test_sb
00:14:20.669  ************************************
00:14:20.669   11:35:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0
00:14:20.669  
00:14:20.669  real	0m23.279s
00:14:20.669  user	0m28.815s
00:14:20.669  sys	0m3.568s
00:14:20.669   11:35:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
00:14:20.669   11:35:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:20.669   11:35:46 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true
00:14:20.669   11:35:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:14:20.669   11:35:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:14:20.669   11:35:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:14:20.669  ************************************
00:14:20.669  START TEST raid_rebuild_test_io
00:14:20.669  ************************************
00:14:20.669   11:35:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true
00:14:20.669   11:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1
00:14:20.669   11:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4
00:14:20.669   11:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false
00:14:20.669   11:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true
00:14:20.669   11:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true
00:14:20.669    11:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:14:20.669    11:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:20.669    11:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:14:20.669    11:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:14:20.669    11:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:20.669    11:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:14:20.669    11:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:14:20.669    11:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:20.669    11:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
00:14:20.669    11:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:14:20.669    11:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:20.669    11:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4
00:14:20.669    11:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:14:20.669    11:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:20.669   11:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:14:20.669   11:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:14:20.669   11:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:14:20.669   11:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size
00:14:20.669   11:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg
00:14:20.669   11:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:14:20.669   11:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset
00:14:20.669   11:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']'
00:14:20.669   11:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0
00:14:20.669   11:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']'
00:14:20.669   11:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89716
00:14:20.669   11:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89716
00:14:20.669   11:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:14:20.669   11:35:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 89716 ']'
00:14:20.669   11:35:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:20.669   11:35:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100
00:14:20.669   11:35:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:20.669  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:20.669   11:35:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable
00:14:20.669   11:35:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:20.929  [2024-12-16 11:35:46.789781] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:14:20.929  [2024-12-16 11:35:46.790020] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89716 ]
00:14:20.929  I/O size of 3145728 is greater than zero copy threshold (65536).
00:14:20.929  Zero copy mechanism will not be used.
00:14:20.929  [2024-12-16 11:35:46.943340] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:20.929  [2024-12-16 11:35:46.991182] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:14:21.188  [2024-12-16 11:35:47.033768] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:21.188  [2024-12-16 11:35:47.033895] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:21.757   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:14:21.757   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0
00:14:21.757   11:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:14:21.757   11:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:14:21.757   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:21.757   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:21.757  BaseBdev1_malloc
00:14:21.757   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:21.757   11:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:14:21.757   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:21.757   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:21.757  [2024-12-16 11:35:47.671907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:14:21.757  [2024-12-16 11:35:47.671977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:21.757  [2024-12-16 11:35:47.672016] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:14:21.757  [2024-12-16 11:35:47.672039] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:21.757  [2024-12-16 11:35:47.674314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:21.757  [2024-12-16 11:35:47.674351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:14:21.757  BaseBdev1
00:14:21.757   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:21.757   11:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:14:21.757   11:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:14:21.757   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:21.757   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:21.757  BaseBdev2_malloc
00:14:21.757   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:21.757   11:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:14:21.757   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:21.757   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:21.757  [2024-12-16 11:35:47.710966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:14:21.758  [2024-12-16 11:35:47.711089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:21.758  [2024-12-16 11:35:47.711121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:14:21.758  [2024-12-16 11:35:47.711134] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:21.758  [2024-12-16 11:35:47.713478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:21.758  [2024-12-16 11:35:47.713518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:14:21.758  BaseBdev2
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:21.758  BaseBdev3_malloc
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:21.758  [2024-12-16 11:35:47.739702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:14:21.758  [2024-12-16 11:35:47.739755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:21.758  [2024-12-16 11:35:47.739798] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:14:21.758  [2024-12-16 11:35:47.739808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:21.758  [2024-12-16 11:35:47.742058] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:21.758  [2024-12-16 11:35:47.742148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:14:21.758  BaseBdev3
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:21.758  BaseBdev4_malloc
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:21.758  [2024-12-16 11:35:47.760319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:14:21.758  [2024-12-16 11:35:47.760379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:21.758  [2024-12-16 11:35:47.760405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:14:21.758  [2024-12-16 11:35:47.760414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:21.758  [2024-12-16 11:35:47.762633] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:21.758  [2024-12-16 11:35:47.762669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:14:21.758  BaseBdev4
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:21.758  spare_malloc
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:21.758  spare_delay
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:21.758  [2024-12-16 11:35:47.796877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:14:21.758  [2024-12-16 11:35:47.796972] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:21.758  [2024-12-16 11:35:47.797015] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:14:21.758  [2024-12-16 11:35:47.797024] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:21.758  [2024-12-16 11:35:47.799167] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:21.758  [2024-12-16 11:35:47.799262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:14:21.758  spare
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:21.758  [2024-12-16 11:35:47.808935] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:21.758  [2024-12-16 11:35:47.810800] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:21.758  [2024-12-16 11:35:47.810869] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:14:21.758  [2024-12-16 11:35:47.810912] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:14:21.758  [2024-12-16 11:35:47.810989] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:14:21.758  [2024-12-16 11:35:47.810999] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:14:21.758  [2024-12-16 11:35:47.811263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:14:21.758  [2024-12-16 11:35:47.811395] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:14:21.758  [2024-12-16 11:35:47.811409] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:14:21.758  [2024-12-16 11:35:47.811550] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:21.758   11:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:21.758    11:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:22.018    11:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:22.018    11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:22.018    11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:22.018    11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:22.018   11:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:22.018    "name": "raid_bdev1",
00:14:22.018    "uuid": "d3f61374-b24f-47ff-91e1-f0809a7b082b",
00:14:22.018    "strip_size_kb": 0,
00:14:22.018    "state": "online",
00:14:22.018    "raid_level": "raid1",
00:14:22.018    "superblock": false,
00:14:22.018    "num_base_bdevs": 4,
00:14:22.018    "num_base_bdevs_discovered": 4,
00:14:22.018    "num_base_bdevs_operational": 4,
00:14:22.018    "base_bdevs_list": [
00:14:22.018      {
00:14:22.018        "name": "BaseBdev1",
00:14:22.018        "uuid": "c79dc160-cbc0-5f62-ac93-e3a618e108fd",
00:14:22.018        "is_configured": true,
00:14:22.018        "data_offset": 0,
00:14:22.018        "data_size": 65536
00:14:22.018      },
00:14:22.018      {
00:14:22.018        "name": "BaseBdev2",
00:14:22.018        "uuid": "b1bbf6f5-65f6-58e5-8469-a4a92049b991",
00:14:22.018        "is_configured": true,
00:14:22.018        "data_offset": 0,
00:14:22.018        "data_size": 65536
00:14:22.018      },
00:14:22.018      {
00:14:22.018        "name": "BaseBdev3",
00:14:22.018        "uuid": "2632bec5-9b36-5904-8407-855e0ccebcfe",
00:14:22.018        "is_configured": true,
00:14:22.018        "data_offset": 0,
00:14:22.018        "data_size": 65536
00:14:22.018      },
00:14:22.018      {
00:14:22.018        "name": "BaseBdev4",
00:14:22.018        "uuid": "f8b90a5f-c725-524b-866a-cbb2aa5a2fb1",
00:14:22.018        "is_configured": true,
00:14:22.018        "data_offset": 0,
00:14:22.018        "data_size": 65536
00:14:22.018      }
00:14:22.018    ]
00:14:22.018  }'
00:14:22.018   11:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:22.018   11:35:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:22.278    11:35:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:14:22.278    11:35:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:14:22.278    11:35:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:22.278    11:35:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:22.278  [2024-12-16 11:35:48.296429] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:22.278    11:35:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:22.278   11:35:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536
00:14:22.278    11:35:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:22.278    11:35:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:22.278    11:35:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:22.278    11:35:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:14:22.538    11:35:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:22.538   11:35:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0
00:14:22.538   11:35:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']'
00:14:22.538   11:35:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:14:22.538   11:35:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:14:22.538   11:35:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:22.538   11:35:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:22.538  [2024-12-16 11:35:48.379943] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:14:22.538   11:35:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:22.538   11:35:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:14:22.538   11:35:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:22.538   11:35:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:22.538   11:35:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:22.538   11:35:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:22.538   11:35:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:22.538   11:35:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:22.538   11:35:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:22.538   11:35:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:22.538   11:35:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:22.538    11:35:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:22.538    11:35:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:22.538    11:35:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:22.538    11:35:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:22.538    11:35:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:22.538   11:35:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:22.538    "name": "raid_bdev1",
00:14:22.538    "uuid": "d3f61374-b24f-47ff-91e1-f0809a7b082b",
00:14:22.538    "strip_size_kb": 0,
00:14:22.538    "state": "online",
00:14:22.538    "raid_level": "raid1",
00:14:22.538    "superblock": false,
00:14:22.538    "num_base_bdevs": 4,
00:14:22.538    "num_base_bdevs_discovered": 3,
00:14:22.538    "num_base_bdevs_operational": 3,
00:14:22.538    "base_bdevs_list": [
00:14:22.538      {
00:14:22.538        "name": null,
00:14:22.538        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:22.538        "is_configured": false,
00:14:22.538        "data_offset": 0,
00:14:22.538        "data_size": 65536
00:14:22.538      },
00:14:22.538      {
00:14:22.538        "name": "BaseBdev2",
00:14:22.538        "uuid": "b1bbf6f5-65f6-58e5-8469-a4a92049b991",
00:14:22.538        "is_configured": true,
00:14:22.538        "data_offset": 0,
00:14:22.538        "data_size": 65536
00:14:22.538      },
00:14:22.538      {
00:14:22.538        "name": "BaseBdev3",
00:14:22.538        "uuid": "2632bec5-9b36-5904-8407-855e0ccebcfe",
00:14:22.538        "is_configured": true,
00:14:22.538        "data_offset": 0,
00:14:22.538        "data_size": 65536
00:14:22.538      },
00:14:22.538      {
00:14:22.538        "name": "BaseBdev4",
00:14:22.538        "uuid": "f8b90a5f-c725-524b-866a-cbb2aa5a2fb1",
00:14:22.538        "is_configured": true,
00:14:22.538        "data_offset": 0,
00:14:22.538        "data_size": 65536
00:14:22.538      }
00:14:22.538    ]
00:14:22.538  }'
00:14:22.538   11:35:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:22.538   11:35:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:22.538  [2024-12-16 11:35:48.477859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:14:22.538  I/O size of 3145728 is greater than zero copy threshold (65536).
00:14:22.538  Zero copy mechanism will not be used.
00:14:22.538  Running I/O for 60 seconds...
00:14:22.798   11:35:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:14:22.798   11:35:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:22.798   11:35:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:22.798  [2024-12-16 11:35:48.831828] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:22.798   11:35:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:22.798   11:35:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1
00:14:23.057  [2024-12-16 11:35:48.886529] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150
00:14:23.057  [2024-12-16 11:35:48.889123] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:23.057  [2024-12-16 11:35:49.008699] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:14:23.057  [2024-12-16 11:35:49.010723] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:14:23.316  [2024-12-16 11:35:49.243048] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:14:23.316  [2024-12-16 11:35:49.244283] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:14:23.834        176.00 IOPS,   528.00 MiB/s
[2024-12-16T11:35:49.901Z] [2024-12-16 11:35:49.671647] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:14:23.834   11:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:23.834   11:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:23.834   11:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:23.834   11:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:23.834   11:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:23.834    11:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:23.834    11:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:23.834    11:35:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:23.834    11:35:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:23.834    11:35:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:24.094   11:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:24.094    "name": "raid_bdev1",
00:14:24.094    "uuid": "d3f61374-b24f-47ff-91e1-f0809a7b082b",
00:14:24.094    "strip_size_kb": 0,
00:14:24.094    "state": "online",
00:14:24.094    "raid_level": "raid1",
00:14:24.094    "superblock": false,
00:14:24.094    "num_base_bdevs": 4,
00:14:24.094    "num_base_bdevs_discovered": 4,
00:14:24.094    "num_base_bdevs_operational": 4,
00:14:24.094    "process": {
00:14:24.094      "type": "rebuild",
00:14:24.094      "target": "spare",
00:14:24.094      "progress": {
00:14:24.094        "blocks": 8192,
00:14:24.094        "percent": 12
00:14:24.094      }
00:14:24.094    },
00:14:24.094    "base_bdevs_list": [
00:14:24.094      {
00:14:24.094        "name": "spare",
00:14:24.094        "uuid": "eef547ea-04e7-5a77-b6d7-88374b8df646",
00:14:24.094        "is_configured": true,
00:14:24.094        "data_offset": 0,
00:14:24.094        "data_size": 65536
00:14:24.094      },
00:14:24.094      {
00:14:24.094        "name": "BaseBdev2",
00:14:24.094        "uuid": "b1bbf6f5-65f6-58e5-8469-a4a92049b991",
00:14:24.094        "is_configured": true,
00:14:24.094        "data_offset": 0,
00:14:24.094        "data_size": 65536
00:14:24.094      },
00:14:24.094      {
00:14:24.094        "name": "BaseBdev3",
00:14:24.094        "uuid": "2632bec5-9b36-5904-8407-855e0ccebcfe",
00:14:24.094        "is_configured": true,
00:14:24.094        "data_offset": 0,
00:14:24.094        "data_size": 65536
00:14:24.094      },
00:14:24.094      {
00:14:24.094        "name": "BaseBdev4",
00:14:24.094        "uuid": "f8b90a5f-c725-524b-866a-cbb2aa5a2fb1",
00:14:24.094        "is_configured": true,
00:14:24.094        "data_offset": 0,
00:14:24.094        "data_size": 65536
00:14:24.094      }
00:14:24.094    ]
00:14:24.094  }'
00:14:24.094    11:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:24.094  [2024-12-16 11:35:49.920526] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:14:24.094   11:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:24.094    11:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:24.094   11:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:24.094   11:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:14:24.094   11:35:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:24.094   11:35:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:24.094  [2024-12-16 11:35:50.009801] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:24.094  [2024-12-16 11:35:50.089579] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:14:24.094  [2024-12-16 11:35:50.092778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:24.094  [2024-12-16 11:35:50.092897] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:24.094  [2024-12-16 11:35:50.092924] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:14:24.094  [2024-12-16 11:35:50.098844] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080
00:14:24.094   11:35:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:24.094   11:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:14:24.094   11:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:24.094   11:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:24.094   11:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:24.094   11:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:24.094   11:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:24.094   11:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:24.094   11:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:24.094   11:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:24.094   11:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:24.094    11:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:24.094    11:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:24.094    11:35:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:24.094    11:35:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:24.094    11:35:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:24.354   11:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:24.354    "name": "raid_bdev1",
00:14:24.354    "uuid": "d3f61374-b24f-47ff-91e1-f0809a7b082b",
00:14:24.354    "strip_size_kb": 0,
00:14:24.354    "state": "online",
00:14:24.354    "raid_level": "raid1",
00:14:24.354    "superblock": false,
00:14:24.354    "num_base_bdevs": 4,
00:14:24.354    "num_base_bdevs_discovered": 3,
00:14:24.354    "num_base_bdevs_operational": 3,
00:14:24.354    "base_bdevs_list": [
00:14:24.354      {
00:14:24.354        "name": null,
00:14:24.354        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:24.354        "is_configured": false,
00:14:24.354        "data_offset": 0,
00:14:24.354        "data_size": 65536
00:14:24.354      },
00:14:24.354      {
00:14:24.354        "name": "BaseBdev2",
00:14:24.354        "uuid": "b1bbf6f5-65f6-58e5-8469-a4a92049b991",
00:14:24.354        "is_configured": true,
00:14:24.354        "data_offset": 0,
00:14:24.354        "data_size": 65536
00:14:24.354      },
00:14:24.354      {
00:14:24.354        "name": "BaseBdev3",
00:14:24.354        "uuid": "2632bec5-9b36-5904-8407-855e0ccebcfe",
00:14:24.354        "is_configured": true,
00:14:24.354        "data_offset": 0,
00:14:24.354        "data_size": 65536
00:14:24.354      },
00:14:24.354      {
00:14:24.354        "name": "BaseBdev4",
00:14:24.354        "uuid": "f8b90a5f-c725-524b-866a-cbb2aa5a2fb1",
00:14:24.354        "is_configured": true,
00:14:24.354        "data_offset": 0,
00:14:24.354        "data_size": 65536
00:14:24.354      }
00:14:24.354    ]
00:14:24.354  }'
00:14:24.354   11:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:24.354   11:35:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:24.614        161.50 IOPS,   484.50 MiB/s
[2024-12-16T11:35:50.681Z]  11:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:24.614   11:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:24.614   11:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:24.614   11:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:24.614   11:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:24.614    11:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:24.614    11:35:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:24.614    11:35:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:24.614    11:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:24.614    11:35:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:24.614   11:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:24.614    "name": "raid_bdev1",
00:14:24.614    "uuid": "d3f61374-b24f-47ff-91e1-f0809a7b082b",
00:14:24.614    "strip_size_kb": 0,
00:14:24.614    "state": "online",
00:14:24.614    "raid_level": "raid1",
00:14:24.614    "superblock": false,
00:14:24.614    "num_base_bdevs": 4,
00:14:24.614    "num_base_bdevs_discovered": 3,
00:14:24.614    "num_base_bdevs_operational": 3,
00:14:24.614    "base_bdevs_list": [
00:14:24.614      {
00:14:24.614        "name": null,
00:14:24.614        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:24.614        "is_configured": false,
00:14:24.614        "data_offset": 0,
00:14:24.614        "data_size": 65536
00:14:24.614      },
00:14:24.614      {
00:14:24.614        "name": "BaseBdev2",
00:14:24.614        "uuid": "b1bbf6f5-65f6-58e5-8469-a4a92049b991",
00:14:24.614        "is_configured": true,
00:14:24.614        "data_offset": 0,
00:14:24.614        "data_size": 65536
00:14:24.614      },
00:14:24.614      {
00:14:24.614        "name": "BaseBdev3",
00:14:24.614        "uuid": "2632bec5-9b36-5904-8407-855e0ccebcfe",
00:14:24.614        "is_configured": true,
00:14:24.615        "data_offset": 0,
00:14:24.615        "data_size": 65536
00:14:24.615      },
00:14:24.615      {
00:14:24.615        "name": "BaseBdev4",
00:14:24.615        "uuid": "f8b90a5f-c725-524b-866a-cbb2aa5a2fb1",
00:14:24.615        "is_configured": true,
00:14:24.615        "data_offset": 0,
00:14:24.615        "data_size": 65536
00:14:24.615      }
00:14:24.615    ]
00:14:24.615  }'
00:14:24.615    11:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:24.615   11:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:14:24.615    11:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:24.615   11:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:14:24.615   11:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:14:24.615   11:35:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:24.615   11:35:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:24.883  [2024-12-16 11:35:50.682116] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:24.883   11:35:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:24.883   11:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1
00:14:24.883  [2024-12-16 11:35:50.733217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:14:24.883  [2024-12-16 11:35:50.735345] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:24.883  [2024-12-16 11:35:50.844401] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:14:24.883  [2024-12-16 11:35:50.845793] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:14:25.153  [2024-12-16 11:35:51.092314] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:14:25.153  [2024-12-16 11:35:51.093085] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:14:25.722        157.33 IOPS,   472.00 MiB/s
[2024-12-16T11:35:51.789Z] [2024-12-16 11:35:51.499813] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:14:25.722  [2024-12-16 11:35:51.618110] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:14:25.722  [2024-12-16 11:35:51.618465] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:14:25.722   11:35:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:25.722   11:35:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:25.722   11:35:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:25.722   11:35:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:25.722   11:35:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:25.722    11:35:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:25.722    11:35:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:25.722    11:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:25.722    11:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:25.722    11:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:25.722   11:35:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:25.722    "name": "raid_bdev1",
00:14:25.722    "uuid": "d3f61374-b24f-47ff-91e1-f0809a7b082b",
00:14:25.722    "strip_size_kb": 0,
00:14:25.722    "state": "online",
00:14:25.722    "raid_level": "raid1",
00:14:25.722    "superblock": false,
00:14:25.722    "num_base_bdevs": 4,
00:14:25.722    "num_base_bdevs_discovered": 4,
00:14:25.722    "num_base_bdevs_operational": 4,
00:14:25.722    "process": {
00:14:25.722      "type": "rebuild",
00:14:25.722      "target": "spare",
00:14:25.722      "progress": {
00:14:25.722        "blocks": 10240,
00:14:25.722        "percent": 15
00:14:25.722      }
00:14:25.722    },
00:14:25.722    "base_bdevs_list": [
00:14:25.722      {
00:14:25.722        "name": "spare",
00:14:25.722        "uuid": "eef547ea-04e7-5a77-b6d7-88374b8df646",
00:14:25.722        "is_configured": true,
00:14:25.722        "data_offset": 0,
00:14:25.722        "data_size": 65536
00:14:25.722      },
00:14:25.722      {
00:14:25.722        "name": "BaseBdev2",
00:14:25.722        "uuid": "b1bbf6f5-65f6-58e5-8469-a4a92049b991",
00:14:25.722        "is_configured": true,
00:14:25.722        "data_offset": 0,
00:14:25.722        "data_size": 65536
00:14:25.722      },
00:14:25.722      {
00:14:25.722        "name": "BaseBdev3",
00:14:25.722        "uuid": "2632bec5-9b36-5904-8407-855e0ccebcfe",
00:14:25.722        "is_configured": true,
00:14:25.722        "data_offset": 0,
00:14:25.722        "data_size": 65536
00:14:25.722      },
00:14:25.722      {
00:14:25.722        "name": "BaseBdev4",
00:14:25.722        "uuid": "f8b90a5f-c725-524b-866a-cbb2aa5a2fb1",
00:14:25.722        "is_configured": true,
00:14:25.722        "data_offset": 0,
00:14:25.722        "data_size": 65536
00:14:25.722      }
00:14:25.722    ]
00:14:25.722  }'
00:14:25.722    11:35:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:25.982   11:35:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:25.982    11:35:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:25.982  [2024-12-16 11:35:51.832355] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:14:25.982   11:35:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:25.982   11:35:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']'
00:14:25.982   11:35:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4
00:14:25.982   11:35:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:14:25.982   11:35:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']'
00:14:25.982   11:35:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:14:25.982   11:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:25.982   11:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:25.982  [2024-12-16 11:35:51.866630] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:14:25.982  [2024-12-16 11:35:52.004768] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080
00:14:25.982  [2024-12-16 11:35:52.004818] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220
00:14:25.982  [2024-12-16 11:35:52.006477] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:14:25.982   11:35:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:25.982   11:35:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]=
00:14:25.982   11:35:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- ))
00:14:25.982   11:35:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:25.982   11:35:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:25.982   11:35:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:25.982   11:35:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:25.982   11:35:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:25.982    11:35:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:25.982    11:35:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:25.982    11:35:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:25.982    11:35:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:25.982    11:35:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:26.243   11:35:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:26.243    "name": "raid_bdev1",
00:14:26.243    "uuid": "d3f61374-b24f-47ff-91e1-f0809a7b082b",
00:14:26.243    "strip_size_kb": 0,
00:14:26.243    "state": "online",
00:14:26.243    "raid_level": "raid1",
00:14:26.243    "superblock": false,
00:14:26.243    "num_base_bdevs": 4,
00:14:26.243    "num_base_bdevs_discovered": 3,
00:14:26.243    "num_base_bdevs_operational": 3,
00:14:26.243    "process": {
00:14:26.243      "type": "rebuild",
00:14:26.243      "target": "spare",
00:14:26.243      "progress": {
00:14:26.243        "blocks": 16384,
00:14:26.243        "percent": 25
00:14:26.243      }
00:14:26.243    },
00:14:26.243    "base_bdevs_list": [
00:14:26.243      {
00:14:26.243        "name": "spare",
00:14:26.243        "uuid": "eef547ea-04e7-5a77-b6d7-88374b8df646",
00:14:26.243        "is_configured": true,
00:14:26.243        "data_offset": 0,
00:14:26.243        "data_size": 65536
00:14:26.243      },
00:14:26.243      {
00:14:26.243        "name": null,
00:14:26.243        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:26.243        "is_configured": false,
00:14:26.243        "data_offset": 0,
00:14:26.243        "data_size": 65536
00:14:26.243      },
00:14:26.243      {
00:14:26.243        "name": "BaseBdev3",
00:14:26.243        "uuid": "2632bec5-9b36-5904-8407-855e0ccebcfe",
00:14:26.243        "is_configured": true,
00:14:26.243        "data_offset": 0,
00:14:26.243        "data_size": 65536
00:14:26.243      },
00:14:26.243      {
00:14:26.243        "name": "BaseBdev4",
00:14:26.243        "uuid": "f8b90a5f-c725-524b-866a-cbb2aa5a2fb1",
00:14:26.243        "is_configured": true,
00:14:26.243        "data_offset": 0,
00:14:26.243        "data_size": 65536
00:14:26.243      }
00:14:26.243    ]
00:14:26.243  }'
00:14:26.243    11:35:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:26.243   11:35:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:26.243    11:35:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:26.243   11:35:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:26.243   11:35:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=405
00:14:26.243   11:35:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:14:26.243   11:35:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:26.243   11:35:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:26.243   11:35:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:26.243   11:35:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:26.243   11:35:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:26.243    11:35:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:26.243    11:35:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:26.243    11:35:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:26.243    11:35:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:26.243    11:35:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:26.243   11:35:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:26.243    "name": "raid_bdev1",
00:14:26.243    "uuid": "d3f61374-b24f-47ff-91e1-f0809a7b082b",
00:14:26.243    "strip_size_kb": 0,
00:14:26.243    "state": "online",
00:14:26.243    "raid_level": "raid1",
00:14:26.243    "superblock": false,
00:14:26.243    "num_base_bdevs": 4,
00:14:26.243    "num_base_bdevs_discovered": 3,
00:14:26.243    "num_base_bdevs_operational": 3,
00:14:26.243    "process": {
00:14:26.243      "type": "rebuild",
00:14:26.243      "target": "spare",
00:14:26.243      "progress": {
00:14:26.243        "blocks": 18432,
00:14:26.243        "percent": 28
00:14:26.243      }
00:14:26.243    },
00:14:26.243    "base_bdevs_list": [
00:14:26.243      {
00:14:26.243        "name": "spare",
00:14:26.243        "uuid": "eef547ea-04e7-5a77-b6d7-88374b8df646",
00:14:26.243        "is_configured": true,
00:14:26.243        "data_offset": 0,
00:14:26.243        "data_size": 65536
00:14:26.243      },
00:14:26.243      {
00:14:26.243        "name": null,
00:14:26.243        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:26.243        "is_configured": false,
00:14:26.243        "data_offset": 0,
00:14:26.243        "data_size": 65536
00:14:26.243      },
00:14:26.243      {
00:14:26.243        "name": "BaseBdev3",
00:14:26.243        "uuid": "2632bec5-9b36-5904-8407-855e0ccebcfe",
00:14:26.243        "is_configured": true,
00:14:26.243        "data_offset": 0,
00:14:26.243        "data_size": 65536
00:14:26.243      },
00:14:26.243      {
00:14:26.243        "name": "BaseBdev4",
00:14:26.243        "uuid": "f8b90a5f-c725-524b-866a-cbb2aa5a2fb1",
00:14:26.243        "is_configured": true,
00:14:26.243        "data_offset": 0,
00:14:26.243        "data_size": 65536
00:14:26.243      }
00:14:26.243    ]
00:14:26.243  }'
00:14:26.243    11:35:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:26.243   11:35:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:26.243    11:35:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:26.243   11:35:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:26.243   11:35:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:14:26.502  [2024-12-16 11:35:52.335406] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:14:26.502        143.00 IOPS,   429.00 MiB/s
[2024-12-16T11:35:52.569Z] [2024-12-16 11:35:52.544466] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720
00:14:27.441   11:35:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:14:27.441   11:35:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:27.441   11:35:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:27.441   11:35:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:27.441   11:35:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:27.441   11:35:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:27.441    11:35:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:27.441    11:35:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:27.441    11:35:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:27.441    11:35:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:27.441    11:35:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:27.441   11:35:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:27.441    "name": "raid_bdev1",
00:14:27.441    "uuid": "d3f61374-b24f-47ff-91e1-f0809a7b082b",
00:14:27.441    "strip_size_kb": 0,
00:14:27.441    "state": "online",
00:14:27.441    "raid_level": "raid1",
00:14:27.441    "superblock": false,
00:14:27.441    "num_base_bdevs": 4,
00:14:27.441    "num_base_bdevs_discovered": 3,
00:14:27.441    "num_base_bdevs_operational": 3,
00:14:27.441    "process": {
00:14:27.441      "type": "rebuild",
00:14:27.441      "target": "spare",
00:14:27.441      "progress": {
00:14:27.441        "blocks": 36864,
00:14:27.441        "percent": 56
00:14:27.441      }
00:14:27.441    },
00:14:27.441    "base_bdevs_list": [
00:14:27.441      {
00:14:27.441        "name": "spare",
00:14:27.441        "uuid": "eef547ea-04e7-5a77-b6d7-88374b8df646",
00:14:27.441        "is_configured": true,
00:14:27.441        "data_offset": 0,
00:14:27.441        "data_size": 65536
00:14:27.441      },
00:14:27.441      {
00:14:27.441        "name": null,
00:14:27.441        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:27.441        "is_configured": false,
00:14:27.441        "data_offset": 0,
00:14:27.441        "data_size": 65536
00:14:27.441      },
00:14:27.441      {
00:14:27.441        "name": "BaseBdev3",
00:14:27.441        "uuid": "2632bec5-9b36-5904-8407-855e0ccebcfe",
00:14:27.441        "is_configured": true,
00:14:27.441        "data_offset": 0,
00:14:27.441        "data_size": 65536
00:14:27.441      },
00:14:27.441      {
00:14:27.441        "name": "BaseBdev4",
00:14:27.441        "uuid": "f8b90a5f-c725-524b-866a-cbb2aa5a2fb1",
00:14:27.441        "is_configured": true,
00:14:27.441        "data_offset": 0,
00:14:27.441        "data_size": 65536
00:14:27.441      }
00:14:27.441    ]
00:14:27.441  }'
00:14:27.441    11:35:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:27.441   11:35:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:27.441    11:35:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:27.441   11:35:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:27.441   11:35:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:14:27.441  [2024-12-16 11:35:53.448691] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008
00:14:28.010        123.00 IOPS,   369.00 MiB/s
[2024-12-16T11:35:54.077Z] [2024-12-16 11:35:53.882081] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152
00:14:28.579   11:35:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:14:28.579   11:35:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:28.579   11:35:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:28.579   11:35:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:28.579   11:35:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:28.579   11:35:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:28.579    11:35:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:28.579    11:35:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:28.579    11:35:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:28.579    11:35:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:28.579  [2024-12-16 11:35:54.455215] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440
00:14:28.579    11:35:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:28.579        109.67 IOPS,   329.00 MiB/s
[2024-12-16T11:35:54.646Z]  11:35:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:28.579    "name": "raid_bdev1",
00:14:28.579    "uuid": "d3f61374-b24f-47ff-91e1-f0809a7b082b",
00:14:28.579    "strip_size_kb": 0,
00:14:28.579    "state": "online",
00:14:28.579    "raid_level": "raid1",
00:14:28.579    "superblock": false,
00:14:28.579    "num_base_bdevs": 4,
00:14:28.579    "num_base_bdevs_discovered": 3,
00:14:28.579    "num_base_bdevs_operational": 3,
00:14:28.579    "process": {
00:14:28.579      "type": "rebuild",
00:14:28.579      "target": "spare",
00:14:28.579      "progress": {
00:14:28.579        "blocks": 55296,
00:14:28.579        "percent": 84
00:14:28.579      }
00:14:28.579    },
00:14:28.579    "base_bdevs_list": [
00:14:28.579      {
00:14:28.579        "name": "spare",
00:14:28.579        "uuid": "eef547ea-04e7-5a77-b6d7-88374b8df646",
00:14:28.579        "is_configured": true,
00:14:28.579        "data_offset": 0,
00:14:28.579        "data_size": 65536
00:14:28.579      },
00:14:28.579      {
00:14:28.579        "name": null,
00:14:28.579        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:28.579        "is_configured": false,
00:14:28.579        "data_offset": 0,
00:14:28.579        "data_size": 65536
00:14:28.579      },
00:14:28.579      {
00:14:28.579        "name": "BaseBdev3",
00:14:28.579        "uuid": "2632bec5-9b36-5904-8407-855e0ccebcfe",
00:14:28.579        "is_configured": true,
00:14:28.579        "data_offset": 0,
00:14:28.579        "data_size": 65536
00:14:28.579      },
00:14:28.579      {
00:14:28.579        "name": "BaseBdev4",
00:14:28.579        "uuid": "f8b90a5f-c725-524b-866a-cbb2aa5a2fb1",
00:14:28.579        "is_configured": true,
00:14:28.579        "data_offset": 0,
00:14:28.579        "data_size": 65536
00:14:28.579      }
00:14:28.579    ]
00:14:28.579  }'
00:14:28.579    11:35:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:28.579   11:35:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:28.579    11:35:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:28.579   11:35:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:28.579   11:35:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:14:28.838  [2024-12-16 11:35:54.663938] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440
00:14:29.097  [2024-12-16 11:35:55.092847] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:14:29.356  [2024-12-16 11:35:55.198177] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:14:29.356  [2024-12-16 11:35:55.200500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:29.615        100.00 IOPS,   300.00 MiB/s
[2024-12-16T11:35:55.682Z]  11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:14:29.615   11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:29.615   11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:29.615   11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:29.615   11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:29.615   11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:29.615    11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:29.615    11:35:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:29.615    11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:29.615    11:35:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:29.615    11:35:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:29.615   11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:29.615    "name": "raid_bdev1",
00:14:29.615    "uuid": "d3f61374-b24f-47ff-91e1-f0809a7b082b",
00:14:29.615    "strip_size_kb": 0,
00:14:29.615    "state": "online",
00:14:29.615    "raid_level": "raid1",
00:14:29.615    "superblock": false,
00:14:29.615    "num_base_bdevs": 4,
00:14:29.615    "num_base_bdevs_discovered": 3,
00:14:29.615    "num_base_bdevs_operational": 3,
00:14:29.615    "base_bdevs_list": [
00:14:29.615      {
00:14:29.615        "name": "spare",
00:14:29.615        "uuid": "eef547ea-04e7-5a77-b6d7-88374b8df646",
00:14:29.615        "is_configured": true,
00:14:29.615        "data_offset": 0,
00:14:29.615        "data_size": 65536
00:14:29.615      },
00:14:29.615      {
00:14:29.615        "name": null,
00:14:29.615        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:29.615        "is_configured": false,
00:14:29.615        "data_offset": 0,
00:14:29.615        "data_size": 65536
00:14:29.615      },
00:14:29.615      {
00:14:29.615        "name": "BaseBdev3",
00:14:29.615        "uuid": "2632bec5-9b36-5904-8407-855e0ccebcfe",
00:14:29.615        "is_configured": true,
00:14:29.615        "data_offset": 0,
00:14:29.615        "data_size": 65536
00:14:29.615      },
00:14:29.615      {
00:14:29.615        "name": "BaseBdev4",
00:14:29.615        "uuid": "f8b90a5f-c725-524b-866a-cbb2aa5a2fb1",
00:14:29.615        "is_configured": true,
00:14:29.615        "data_offset": 0,
00:14:29.615        "data_size": 65536
00:14:29.615      }
00:14:29.615    ]
00:14:29.615  }'
00:14:29.615    11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:29.615   11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:14:29.615    11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:29.875   11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:14:29.875   11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break
00:14:29.875   11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:29.875   11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:29.876   11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:29.876   11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:29.876   11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:29.876    11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:29.876    11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:29.876    11:35:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:29.876    11:35:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:29.876    11:35:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:29.876   11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:29.876    "name": "raid_bdev1",
00:14:29.876    "uuid": "d3f61374-b24f-47ff-91e1-f0809a7b082b",
00:14:29.876    "strip_size_kb": 0,
00:14:29.876    "state": "online",
00:14:29.876    "raid_level": "raid1",
00:14:29.876    "superblock": false,
00:14:29.876    "num_base_bdevs": 4,
00:14:29.876    "num_base_bdevs_discovered": 3,
00:14:29.876    "num_base_bdevs_operational": 3,
00:14:29.876    "base_bdevs_list": [
00:14:29.876      {
00:14:29.876        "name": "spare",
00:14:29.876        "uuid": "eef547ea-04e7-5a77-b6d7-88374b8df646",
00:14:29.876        "is_configured": true,
00:14:29.876        "data_offset": 0,
00:14:29.876        "data_size": 65536
00:14:29.876      },
00:14:29.876      {
00:14:29.876        "name": null,
00:14:29.876        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:29.876        "is_configured": false,
00:14:29.876        "data_offset": 0,
00:14:29.876        "data_size": 65536
00:14:29.876      },
00:14:29.876      {
00:14:29.876        "name": "BaseBdev3",
00:14:29.876        "uuid": "2632bec5-9b36-5904-8407-855e0ccebcfe",
00:14:29.876        "is_configured": true,
00:14:29.876        "data_offset": 0,
00:14:29.876        "data_size": 65536
00:14:29.876      },
00:14:29.876      {
00:14:29.876        "name": "BaseBdev4",
00:14:29.876        "uuid": "f8b90a5f-c725-524b-866a-cbb2aa5a2fb1",
00:14:29.876        "is_configured": true,
00:14:29.876        "data_offset": 0,
00:14:29.876        "data_size": 65536
00:14:29.876      }
00:14:29.876    ]
00:14:29.876  }'
00:14:29.876    11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:29.876   11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:14:29.876    11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:29.876   11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:14:29.876   11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:14:29.876   11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:29.876   11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:29.876   11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:29.876   11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:29.876   11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:29.876   11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:29.876   11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:29.876   11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:29.876   11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:29.876    11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:29.876    11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:29.876    11:35:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:29.876    11:35:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:29.876    11:35:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:29.876   11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:29.876    "name": "raid_bdev1",
00:14:29.876    "uuid": "d3f61374-b24f-47ff-91e1-f0809a7b082b",
00:14:29.876    "strip_size_kb": 0,
00:14:29.876    "state": "online",
00:14:29.876    "raid_level": "raid1",
00:14:29.876    "superblock": false,
00:14:29.876    "num_base_bdevs": 4,
00:14:29.876    "num_base_bdevs_discovered": 3,
00:14:29.876    "num_base_bdevs_operational": 3,
00:14:29.876    "base_bdevs_list": [
00:14:29.876      {
00:14:29.876        "name": "spare",
00:14:29.876        "uuid": "eef547ea-04e7-5a77-b6d7-88374b8df646",
00:14:29.876        "is_configured": true,
00:14:29.876        "data_offset": 0,
00:14:29.876        "data_size": 65536
00:14:29.876      },
00:14:29.876      {
00:14:29.876        "name": null,
00:14:29.876        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:29.876        "is_configured": false,
00:14:29.876        "data_offset": 0,
00:14:29.876        "data_size": 65536
00:14:29.876      },
00:14:29.876      {
00:14:29.876        "name": "BaseBdev3",
00:14:29.876        "uuid": "2632bec5-9b36-5904-8407-855e0ccebcfe",
00:14:29.876        "is_configured": true,
00:14:29.876        "data_offset": 0,
00:14:29.876        "data_size": 65536
00:14:29.876      },
00:14:29.876      {
00:14:29.876        "name": "BaseBdev4",
00:14:29.876        "uuid": "f8b90a5f-c725-524b-866a-cbb2aa5a2fb1",
00:14:29.876        "is_configured": true,
00:14:29.876        "data_offset": 0,
00:14:29.876        "data_size": 65536
00:14:29.876      }
00:14:29.876    ]
00:14:29.876  }'
00:14:29.876   11:35:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:29.876   11:35:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
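verify_raid_bdev_state turns tracing off (the set +x above) before it compares the fields, so the comparisons themselves are not echoed into the log. Functionally they amount to something like the following, with the field names taken from the JSON dump above (a sketch, not the script's literal code):

  info=$(rpc.py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  [ "$(jq -r '.state' <<< "$info")" = "online" ]
  [ "$(jq -r '.raid_level' <<< "$info")" = "raid1" ]
  [ "$(jq -r '.num_base_bdevs_discovered' <<< "$info")" -eq 3 ]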
00:14:30.445   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:14:30.445   11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:30.445   11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:30.445  [2024-12-16 11:35:56.287849] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:30.445  [2024-12-16 11:35:56.287892] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:30.445  
00:14:30.445                                                                                                  Latency(us)
00:14:30.445  
[2024-12-16T11:35:56.512Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:30.445  Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:14:30.445  	 raid_bdev1          :       7.92      93.96     281.87       0.00     0.00   15925.08     302.28  123631.23
00:14:30.445  
[2024-12-16T11:35:56.512Z]  ===================================================================================================================
00:14:30.445  
[2024-12-16T11:35:56.512Z]  Total                       :                 93.96     281.87       0.00     0.00   15925.08     302.28  123631.23
00:14:30.445  [2024-12-16 11:35:56.387939] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:30.445  [2024-12-16 11:35:56.387992] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:30.445  [2024-12-16 11:35:56.388109] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:30.445  [2024-12-16 11:35:56.388138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:14:30.445  {
00:14:30.445    "results": [
00:14:30.445      {
00:14:30.445        "job": "raid_bdev1",
00:14:30.445        "core_mask": "0x1",
00:14:30.445        "workload": "randrw",
00:14:30.445        "percentage": 50,
00:14:30.445        "status": "finished",
00:14:30.445        "queue_depth": 2,
00:14:30.445        "io_size": 3145728,
00:14:30.445        "runtime": 7.918641,
00:14:30.445        "iops": 93.95551585177306,
00:14:30.445        "mibps": 281.8665475553192,
00:14:30.445        "io_failed": 0,
00:14:30.445        "io_timeout": 0,
00:14:30.445        "avg_latency_us": 15925.084810067146,
00:14:30.445        "min_latency_us": 302.2812227074236,
00:14:30.445        "max_latency_us": 123631.23144104803
00:14:30.445      }
00:14:30.445    ],
00:14:30.445    "core_count": 1
00:14:30.445  }
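The JSON blob above is bdevperf's machine-readable result record for the finished run; the human-readable table a few lines earlier is rendered from the same fields. If the blob is saved to a file (results.json is just an illustrative name), the headline numbers can be pulled back out with:

  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json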
00:14:30.445   11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:30.445    11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:30.445    11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:30.445    11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length
00:14:30.445    11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:30.445    11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:30.446   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:14:30.446   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:14:30.446   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']'
00:14:30.446   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0
00:14:30.446   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:14:30.446   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare')
00:14:30.446   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:14:30.446   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:14:30.446   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:14:30.446   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i
00:14:30.446   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:14:30.446   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:14:30.446   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0
00:14:30.706  /dev/nbd0
00:14:30.706    11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:14:30.706   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:14:30.706   11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:14:30.706   11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i
00:14:30.706   11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:14:30.706   11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:14:30.706   11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:14:30.706   11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break
00:14:30.706   11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:14:30.706   11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:14:30.706   11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:14:30.706  1+0 records in
00:14:30.706  1+0 records out
00:14:30.706  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447859 s, 9.1 MB/s
00:14:30.706    11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:30.706   11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096
00:14:30.706   11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:30.706   11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:14:30.706   11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0
00:14:30.706   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:14:30.706   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:14:30.706   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}"
00:14:30.706   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']'
00:14:30.706   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue
00:14:30.706   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}"
00:14:30.706   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']'
00:14:30.706   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1
00:14:30.706   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:14:30.706   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3')
00:14:30.706   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:14:30.706   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
00:14:30.706   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:14:30.706   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i
00:14:30.706   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:14:30.706   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:14:30.706   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1
00:14:30.967  /dev/nbd1
00:14:30.967    11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:14:30.967   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:14:30.967   11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:14:30.967   11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i
00:14:30.967   11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:14:30.967   11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:14:30.967   11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:14:30.967   11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break
00:14:30.967   11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:14:30.967   11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:14:30.967   11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:14:30.967  1+0 records in
00:14:30.967  1+0 records out
00:14:30.967  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257925 s, 15.9 MB/s
00:14:30.967    11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:30.967   11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096
00:14:30.967   11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:30.967   11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:14:30.967   11:35:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0
00:14:30.967   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:14:30.967   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:14:30.967   11:35:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
00:14:31.229   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1
00:14:31.229   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:14:31.229   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:14:31.229   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:14:31.229   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i
00:14:31.229   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:14:31.229   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:14:31.491    11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:14:31.491   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:14:31.491   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:14:31.491   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:14:31.491   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:14:31.491   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:14:31.491   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break
00:14:31.491   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0
00:14:31.491   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}"
00:14:31.491   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']'
00:14:31.491   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1
00:14:31.491   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:14:31.491   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4')
00:14:31.491   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:14:31.491   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
00:14:31.491   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:14:31.491   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i
00:14:31.491   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:14:31.491   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:14:31.491   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1
00:14:31.491  /dev/nbd1
00:14:31.491    11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:14:31.491   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:14:31.491   11:35:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:14:31.491   11:35:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i
00:14:31.491   11:35:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:14:31.491   11:35:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:14:31.491   11:35:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:14:31.751   11:35:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break
00:14:31.751   11:35:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:14:31.751   11:35:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:14:31.751   11:35:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:14:31.751  1+0 records in
00:14:31.751  1+0 records out
00:14:31.751  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222957 s, 18.4 MB/s
00:14:31.751    11:35:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:31.751   11:35:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096
00:14:31.751   11:35:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:31.751   11:35:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:14:31.751   11:35:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0
00:14:31.751   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:14:31.751   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:14:31.751   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
00:14:31.751   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1
00:14:31.751   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:14:31.751   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:14:31.751   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:14:31.751   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i
00:14:31.751   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:14:31.751   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:14:32.010    11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:14:32.010   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:14:32.010   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:14:32.010   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:14:32.010   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:14:32.010   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:14:32.010   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break
00:14:32.010   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0
00:14:32.010   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:14:32.010   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:14:32.010   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:14:32.010   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:14:32.010   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i
00:14:32.010   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:14:32.010   11:35:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:14:32.010    11:35:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:14:32.010   11:35:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:14:32.010   11:35:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:14:32.010   11:35:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:14:32.010   11:35:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:14:32.010   11:35:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:14:32.270   11:35:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break
00:14:32.270   11:35:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0
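Stripped of the nbd_common.sh bookkeeping (device allocation, waitfornbd retries), the integrity check that just ran boils down to exposing the rebuilt "spare" and each surviving base bdev over NBD and byte-comparing them; the removed member's empty slot is skipped by the continue seen above. A condensed sketch using the same socket and device nodes as this run (rpc.py standing in for spdk_repo/spdk/scripts/rpc.py):

  rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0
  rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1
  cmp -i 0 /dev/nbd0 /dev/nbd1      # -i 0: no superblock here, data starts at offset 0
  rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
  rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1
  cmp -i 0 /dev/nbd0 /dev/nbd1
  rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
  rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0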
00:14:32.270   11:35:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']'
00:14:32.270   11:35:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 89716
00:14:32.270   11:35:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 89716 ']'
00:14:32.270   11:35:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 89716
00:14:32.270    11:35:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname
00:14:32.270   11:35:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:14:32.270    11:35:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89716
00:14:32.270   11:35:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:14:32.270   11:35:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:14:32.270  killing process with pid 89716
00:14:32.270   11:35:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89716'
00:14:32.270   11:35:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 89716
00:14:32.270  Received shutdown signal, test time was about 9.659234 seconds
00:14:32.270  
00:14:32.270                                                                                                  Latency(us)
00:14:32.270  
[2024-12-16T11:35:58.337Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:32.270  
[2024-12-16T11:35:58.337Z]  ===================================================================================================================
00:14:32.270  
[2024-12-16T11:35:58.337Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:14:32.270  [2024-12-16 11:35:58.120856] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:32.270   11:35:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 89716
00:14:32.270  [2024-12-16 11:35:58.166238] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:32.530   11:35:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0
00:14:32.530  
00:14:32.530  real	0m11.714s
00:14:32.530  user	0m15.268s
00:14:32.530  sys	0m1.828s
00:14:32.530   11:35:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:14:32.530   11:35:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:32.530  ************************************
00:14:32.530  END TEST raid_rebuild_test_io
00:14:32.530  ************************************
00:14:32.530   11:35:58 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true
00:14:32.530   11:35:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:14:32.530   11:35:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:14:32.530   11:35:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:14:32.530  ************************************
00:14:32.530  START TEST raid_rebuild_test_sb_io
00:14:32.530  ************************************
00:14:32.530   11:35:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true
00:14:32.530   11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1
00:14:32.530   11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4
00:14:32.530   11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true
00:14:32.530   11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true
00:14:32.530   11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true
00:14:32.530    11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:14:32.530    11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:32.530    11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:14:32.530    11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:14:32.531    11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:32.531    11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:14:32.531    11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:14:32.531    11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:32.531    11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
00:14:32.531    11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:14:32.531    11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:32.531    11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4
00:14:32.531    11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:14:32.531    11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:32.531   11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:14:32.531   11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:14:32.531   11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:14:32.531   11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size
00:14:32.531   11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg
00:14:32.531   11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:14:32.531   11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset
00:14:32.531   11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']'
00:14:32.531   11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0
00:14:32.531   11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
00:14:32.531   11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
00:14:32.531   11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=90114
00:14:32.531   11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 90114
00:14:32.531   11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:14:32.531   11:35:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 90114 ']'
00:14:32.531   11:35:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:32.531   11:35:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100
00:14:32.531   11:35:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:32.531  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:32.531   11:35:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable
00:14:32.531   11:35:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:32.531  I/O size of 3145728 is greater than zero copy threshold (65536).
00:14:32.531  Zero copy mechanism will not be used.
00:14:32.531  [2024-12-16 11:35:58.568278] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:14:32.531  [2024-12-16 11:35:58.568396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90114 ]
00:14:32.791  [2024-12-16 11:35:58.728075] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:32.791  [2024-12-16 11:35:58.772128] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:14:32.791  [2024-12-16 11:35:58.814108] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:32.791  [2024-12-16 11:35:58.814158] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
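This bdevperf instance was launched with -z, so at this point it has only brought up the bdev layer and the RPC server and is idling; the randrw workload against raid_bdev1 is started later by the test script (the same call is visible further down in this log):

  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests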
00:14:33.360   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:14:33.360   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0
00:14:33.360   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:14:33.360   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:14:33.361   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:33.361   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:33.361  BaseBdev1_malloc
00:14:33.361   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:33.361   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:14:33.361   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:33.361   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:33.361  [2024-12-16 11:35:59.424403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:14:33.361  [2024-12-16 11:35:59.424483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:33.361  [2024-12-16 11:35:59.424523] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:14:33.361  [2024-12-16 11:35:59.424537] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:33.621  [2024-12-16 11:35:59.426723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:33.621  [2024-12-16 11:35:59.426759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:14:33.621  BaseBdev1
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:33.621  BaseBdev2_malloc
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:33.621  [2024-12-16 11:35:59.474155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:14:33.621  [2024-12-16 11:35:59.474221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:33.621  [2024-12-16 11:35:59.474252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:14:33.621  [2024-12-16 11:35:59.474266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:33.621  [2024-12-16 11:35:59.477347] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:33.621  [2024-12-16 11:35:59.477389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:14:33.621  BaseBdev2
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:33.621  BaseBdev3_malloc
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:33.621  [2024-12-16 11:35:59.503350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:14:33.621  [2024-12-16 11:35:59.503416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:33.621  [2024-12-16 11:35:59.503444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:14:33.621  [2024-12-16 11:35:59.503455] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:33.621  [2024-12-16 11:35:59.505955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:33.621  [2024-12-16 11:35:59.505987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:14:33.621  BaseBdev3
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:33.621  BaseBdev4_malloc
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:33.621  [2024-12-16 11:35:59.532426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:14:33.621  [2024-12-16 11:35:59.532488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:33.621  [2024-12-16 11:35:59.532528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:14:33.621  [2024-12-16 11:35:59.532539] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:33.621  [2024-12-16 11:35:59.534957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:33.621  [2024-12-16 11:35:59.534991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:14:33.621  BaseBdev4
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:33.621  spare_malloc
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:33.621  spare_delay
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:33.621  [2024-12-16 11:35:59.573228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:14:33.621  [2024-12-16 11:35:59.573278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:33.621  [2024-12-16 11:35:59.573299] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:14:33.621  [2024-12-16 11:35:59.573308] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:33.621  [2024-12-16 11:35:59.575399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:33.621  [2024-12-16 11:35:59.575433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:14:33.621  spare
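Condensed, the bdev stack assembled by the RPCs above is: each array member is a passthru bdev on top of a 32 MiB, 512 B-block malloc bdev, and the rebuild target "spare" additionally routes through a delay bdev, presumably so a later rebuild onto it is slow enough to be observed in flight. A sketch of one member plus the spare, using the same commands (assuming rpc.py is on PATH):

  # One array member: malloc -> passthru
  rpc.py bdev_malloc_create 32 512 -b BaseBdev1_malloc
  rpc.py bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
  # Rebuild target: malloc -> delay (artificial latency, values from the trace above) -> passthru
  rpc.py bdev_malloc_create 32 512 -b spare_malloc
  rpc.py bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
  rpc.py bdev_passthru_create -b spare_delay -p spare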
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:33.621  [2024-12-16 11:35:59.585294] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:33.621  [2024-12-16 11:35:59.587102] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:33.621  [2024-12-16 11:35:59.587171] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:14:33.621  [2024-12-16 11:35:59.587213] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:14:33.621  [2024-12-16 11:35:59.587434] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:14:33.621  [2024-12-16 11:35:59.587451] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:14:33.621  [2024-12-16 11:35:59.587746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:14:33.621  [2024-12-16 11:35:59.587915] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:14:33.621  [2024-12-16 11:35:59.587937] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:14:33.621  [2024-12-16 11:35:59.588088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:33.621   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:33.621    11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:33.621    11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:33.621    11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:33.621    11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:33.622    11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:33.622   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:33.622    "name": "raid_bdev1",
00:14:33.622    "uuid": "08ed2e30-f4e6-4817-aede-b96a9d0a80e9",
00:14:33.622    "strip_size_kb": 0,
00:14:33.622    "state": "online",
00:14:33.622    "raid_level": "raid1",
00:14:33.622    "superblock": true,
00:14:33.622    "num_base_bdevs": 4,
00:14:33.622    "num_base_bdevs_discovered": 4,
00:14:33.622    "num_base_bdevs_operational": 4,
00:14:33.622    "base_bdevs_list": [
00:14:33.622      {
00:14:33.622        "name": "BaseBdev1",
00:14:33.622        "uuid": "42dd6dea-cb16-5d0f-93fc-a97db73146ef",
00:14:33.622        "is_configured": true,
00:14:33.622        "data_offset": 2048,
00:14:33.622        "data_size": 63488
00:14:33.622      },
00:14:33.622      {
00:14:33.622        "name": "BaseBdev2",
00:14:33.622        "uuid": "4cf31e10-0f84-5ac3-b7d8-36f8a6e3c08f",
00:14:33.622        "is_configured": true,
00:14:33.622        "data_offset": 2048,
00:14:33.622        "data_size": 63488
00:14:33.622      },
00:14:33.622      {
00:14:33.622        "name": "BaseBdev3",
00:14:33.622        "uuid": "1e65e47c-bfc4-5458-a68e-2015d7435a39",
00:14:33.622        "is_configured": true,
00:14:33.622        "data_offset": 2048,
00:14:33.622        "data_size": 63488
00:14:33.622      },
00:14:33.622      {
00:14:33.622        "name": "BaseBdev4",
00:14:33.622        "uuid": "d1b478f9-e7d3-5026-90a4-42435991cb82",
00:14:33.622        "is_configured": true,
00:14:33.622        "data_offset": 2048,
00:14:33.622        "data_size": 63488
00:14:33.622      }
00:14:33.622    ]
00:14:33.622  }'
00:14:33.622   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:33.622   11:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:34.190    11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:14:34.190    11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:14:34.190    11:36:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:34.190    11:36:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:34.190  [2024-12-16 11:36:00.048878] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:34.190    11:36:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:34.190   11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488
00:14:34.190    11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:34.190    11:36:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:34.190    11:36:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:34.190    11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:14:34.190    11:36:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:34.190   11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048
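Because this array was created with -s, each member reserves its first 2048 blocks (data_offset 2048) for the on-disk superblock region, so a 32 MiB malloc bdev with 512 B blocks contributes 65536 - 2048 = 63488 data blocks, matching both raid_bdev_size and the per-member data_size above. The non-superblock array earlier in this file reported data_offset 0 and data_size 65536 for the same malloc size.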
00:14:34.190   11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']'
00:14:34.190   11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:14:34.190   11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:14:34.190   11:36:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:34.190   11:36:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:34.190  [2024-12-16 11:36:00.148324] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
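BaseBdev1 is removed while the background I/O started by perform_tests is still running; the array is expected to stay online and report 3 of 4 members discovered, with the first slot replaced by a null entry (the all-zero UUID in the dump below). Reduced to the raw RPC calls (same jq filter used throughout this log):

  rpc.py bdev_raid_remove_base_bdev BaseBdev1
  rpc.py bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_discovered'   # expect 3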
00:14:34.190   11:36:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:34.190   11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:14:34.190   11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:34.190   11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:34.190   11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:34.190   11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:34.190   11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:34.190   11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:34.190   11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:34.190   11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:34.190   11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:34.190    11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:34.190    11:36:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:34.190    11:36:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:34.190    11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:34.190    11:36:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:34.190   11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:34.190    "name": "raid_bdev1",
00:14:34.190    "uuid": "08ed2e30-f4e6-4817-aede-b96a9d0a80e9",
00:14:34.190    "strip_size_kb": 0,
00:14:34.190    "state": "online",
00:14:34.190    "raid_level": "raid1",
00:14:34.190    "superblock": true,
00:14:34.190    "num_base_bdevs": 4,
00:14:34.190    "num_base_bdevs_discovered": 3,
00:14:34.190    "num_base_bdevs_operational": 3,
00:14:34.190    "base_bdevs_list": [
00:14:34.190      {
00:14:34.190        "name": null,
00:14:34.190        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:34.190        "is_configured": false,
00:14:34.190        "data_offset": 0,
00:14:34.190        "data_size": 63488
00:14:34.190      },
00:14:34.190      {
00:14:34.190        "name": "BaseBdev2",
00:14:34.190        "uuid": "4cf31e10-0f84-5ac3-b7d8-36f8a6e3c08f",
00:14:34.190        "is_configured": true,
00:14:34.190        "data_offset": 2048,
00:14:34.190        "data_size": 63488
00:14:34.190      },
00:14:34.190      {
00:14:34.190        "name": "BaseBdev3",
00:14:34.190        "uuid": "1e65e47c-bfc4-5458-a68e-2015d7435a39",
00:14:34.190        "is_configured": true,
00:14:34.190        "data_offset": 2048,
00:14:34.190        "data_size": 63488
00:14:34.190      },
00:14:34.190      {
00:14:34.190        "name": "BaseBdev4",
00:14:34.190        "uuid": "d1b478f9-e7d3-5026-90a4-42435991cb82",
00:14:34.190        "is_configured": true,
00:14:34.190        "data_offset": 2048,
00:14:34.190        "data_size": 63488
00:14:34.190      }
00:14:34.190    ]
00:14:34.190  }'
00:14:34.190   11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:34.190   11:36:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:34.190  [2024-12-16 11:36:00.230302] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:14:34.190  I/O size of 3145728 is greater than zero copy threshold (65536).
00:14:34.190  Zero copy mechanism will not be used.
00:14:34.190  Running I/O for 60 seconds...
00:14:34.759   11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:14:34.759   11:36:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:34.759   11:36:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:34.759  [2024-12-16 11:36:00.607320] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:34.759   11:36:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:34.759   11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1
00:14:34.759  [2024-12-16 11:36:00.667156] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150
00:14:34.759  [2024-12-16 11:36:00.669350] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:34.759  [2024-12-16 11:36:00.819841] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:14:35.018  [2024-12-16 11:36:01.029916] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:14:35.018  [2024-12-16 11:36:01.030534] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:14:35.536        162.00 IOPS,   486.00 MiB/s
[2024-12-16T11:36:01.603Z] [2024-12-16 11:36:01.480347] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:14:35.795   11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:35.795   11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:35.795   11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:35.795   11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:35.795   11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:35.795    11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:35.795    11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:35.795    11:36:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:35.795    11:36:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:35.795    11:36:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:35.795   11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:35.795    "name": "raid_bdev1",
00:14:35.795    "uuid": "08ed2e30-f4e6-4817-aede-b96a9d0a80e9",
00:14:35.795    "strip_size_kb": 0,
00:14:35.795    "state": "online",
00:14:35.795    "raid_level": "raid1",
00:14:35.795    "superblock": true,
00:14:35.795    "num_base_bdevs": 4,
00:14:35.795    "num_base_bdevs_discovered": 4,
00:14:35.795    "num_base_bdevs_operational": 4,
00:14:35.795    "process": {
00:14:35.795      "type": "rebuild",
00:14:35.795      "target": "spare",
00:14:35.795      "progress": {
00:14:35.795        "blocks": 10240,
00:14:35.795        "percent": 16
00:14:35.795      }
00:14:35.795    },
00:14:35.795    "base_bdevs_list": [
00:14:35.795      {
00:14:35.795        "name": "spare",
00:14:35.795        "uuid": "2011920c-e1c1-5211-858e-ca9284518c90",
00:14:35.795        "is_configured": true,
00:14:35.795        "data_offset": 2048,
00:14:35.795        "data_size": 63488
00:14:35.795      },
00:14:35.795      {
00:14:35.795        "name": "BaseBdev2",
00:14:35.795        "uuid": "4cf31e10-0f84-5ac3-b7d8-36f8a6e3c08f",
00:14:35.795        "is_configured": true,
00:14:35.795        "data_offset": 2048,
00:14:35.795        "data_size": 63488
00:14:35.795      },
00:14:35.795      {
00:14:35.795        "name": "BaseBdev3",
00:14:35.795        "uuid": "1e65e47c-bfc4-5458-a68e-2015d7435a39",
00:14:35.795        "is_configured": true,
00:14:35.795        "data_offset": 2048,
00:14:35.795        "data_size": 63488
00:14:35.795      },
00:14:35.795      {
00:14:35.795        "name": "BaseBdev4",
00:14:35.795        "uuid": "d1b478f9-e7d3-5026-90a4-42435991cb82",
00:14:35.795        "is_configured": true,
00:14:35.795        "data_offset": 2048,
00:14:35.795        "data_size": 63488
00:14:35.795      }
00:14:35.795    ]
00:14:35.795  }'
00:14:35.795    11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:35.795   11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:35.795    11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:35.795   11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:35.795   11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:14:35.795   11:36:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:35.795   11:36:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:35.795  [2024-12-16 11:36:01.793320] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:36.054  [2024-12-16 11:36:01.899058] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:14:36.054  [2024-12-16 11:36:01.902501] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:36.054  [2024-12-16 11:36:01.902564] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:36.054  [2024-12-16 11:36:01.902578] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:14:36.054  [2024-12-16 11:36:01.908600] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080
00:14:36.054   11:36:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:36.054   11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:14:36.054   11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:36.054   11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:36.054   11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:36.054   11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:36.054   11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:36.054   11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:36.054   11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:36.054   11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:36.054   11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:36.054    11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:36.054    11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:36.054    11:36:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:36.054    11:36:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:36.054    11:36:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:36.054   11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:36.054    "name": "raid_bdev1",
00:14:36.054    "uuid": "08ed2e30-f4e6-4817-aede-b96a9d0a80e9",
00:14:36.054    "strip_size_kb": 0,
00:14:36.055    "state": "online",
00:14:36.055    "raid_level": "raid1",
00:14:36.055    "superblock": true,
00:14:36.055    "num_base_bdevs": 4,
00:14:36.055    "num_base_bdevs_discovered": 3,
00:14:36.055    "num_base_bdevs_operational": 3,
00:14:36.055    "base_bdevs_list": [
00:14:36.055      {
00:14:36.055        "name": null,
00:14:36.055        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:36.055        "is_configured": false,
00:14:36.055        "data_offset": 0,
00:14:36.055        "data_size": 63488
00:14:36.055      },
00:14:36.055      {
00:14:36.055        "name": "BaseBdev2",
00:14:36.055        "uuid": "4cf31e10-0f84-5ac3-b7d8-36f8a6e3c08f",
00:14:36.055        "is_configured": true,
00:14:36.055        "data_offset": 2048,
00:14:36.055        "data_size": 63488
00:14:36.055      },
00:14:36.055      {
00:14:36.055        "name": "BaseBdev3",
00:14:36.055        "uuid": "1e65e47c-bfc4-5458-a68e-2015d7435a39",
00:14:36.055        "is_configured": true,
00:14:36.055        "data_offset": 2048,
00:14:36.055        "data_size": 63488
00:14:36.055      },
00:14:36.055      {
00:14:36.055        "name": "BaseBdev4",
00:14:36.055        "uuid": "d1b478f9-e7d3-5026-90a4-42435991cb82",
00:14:36.055        "is_configured": true,
00:14:36.055        "data_offset": 2048,
00:14:36.055        "data_size": 63488
00:14:36.055      }
00:14:36.055    ]
00:14:36.055  }'
00:14:36.055   11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:36.055   11:36:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:36.575        148.50 IOPS,   445.50 MiB/s
[2024-12-16T11:36:02.642Z]  11:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:36.575   11:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:36.576   11:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:36.576   11:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:36.576   11:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:36.576    11:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:36.576    11:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:36.576    11:36:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:36.576    11:36:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:36.576    11:36:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:36.576   11:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:36.576    "name": "raid_bdev1",
00:14:36.576    "uuid": "08ed2e30-f4e6-4817-aede-b96a9d0a80e9",
00:14:36.576    "strip_size_kb": 0,
00:14:36.576    "state": "online",
00:14:36.576    "raid_level": "raid1",
00:14:36.576    "superblock": true,
00:14:36.576    "num_base_bdevs": 4,
00:14:36.576    "num_base_bdevs_discovered": 3,
00:14:36.576    "num_base_bdevs_operational": 3,
00:14:36.576    "base_bdevs_list": [
00:14:36.576      {
00:14:36.576        "name": null,
00:14:36.576        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:36.576        "is_configured": false,
00:14:36.576        "data_offset": 0,
00:14:36.576        "data_size": 63488
00:14:36.576      },
00:14:36.576      {
00:14:36.576        "name": "BaseBdev2",
00:14:36.576        "uuid": "4cf31e10-0f84-5ac3-b7d8-36f8a6e3c08f",
00:14:36.576        "is_configured": true,
00:14:36.576        "data_offset": 2048,
00:14:36.576        "data_size": 63488
00:14:36.576      },
00:14:36.576      {
00:14:36.576        "name": "BaseBdev3",
00:14:36.576        "uuid": "1e65e47c-bfc4-5458-a68e-2015d7435a39",
00:14:36.576        "is_configured": true,
00:14:36.576        "data_offset": 2048,
00:14:36.576        "data_size": 63488
00:14:36.576      },
00:14:36.576      {
00:14:36.576        "name": "BaseBdev4",
00:14:36.576        "uuid": "d1b478f9-e7d3-5026-90a4-42435991cb82",
00:14:36.576        "is_configured": true,
00:14:36.576        "data_offset": 2048,
00:14:36.576        "data_size": 63488
00:14:36.576      }
00:14:36.576    ]
00:14:36.576  }'
00:14:36.576    11:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:36.576   11:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:14:36.576    11:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:36.576   11:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:14:36.576   11:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:14:36.576   11:36:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:36.576   11:36:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:36.576  [2024-12-16 11:36:02.559565] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:36.576   11:36:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:36.576   11:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1
00:14:36.576  [2024-12-16 11:36:02.619672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:14:36.576  [2024-12-16 11:36:02.621711] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:36.863  [2024-12-16 11:36:02.743321] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:14:36.863  [2024-12-16 11:36:02.744556] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:14:37.122  [2024-12-16 11:36:02.955590] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:14:37.122  [2024-12-16 11:36:02.955882] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:14:37.381        160.33 IOPS,   481.00 MiB/s
[2024-12-16T11:36:03.448Z] [2024-12-16 11:36:03.351286] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:14:37.381  [2024-12-16 11:36:03.351953] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:14:37.640   11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:37.640   11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:37.640   11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:37.640   11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:37.640   11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:37.640    11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:37.640    11:36:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:37.640    11:36:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:37.640    11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:37.640    11:36:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:37.640   11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:37.640    "name": "raid_bdev1",
00:14:37.640    "uuid": "08ed2e30-f4e6-4817-aede-b96a9d0a80e9",
00:14:37.640    "strip_size_kb": 0,
00:14:37.640    "state": "online",
00:14:37.640    "raid_level": "raid1",
00:14:37.640    "superblock": true,
00:14:37.640    "num_base_bdevs": 4,
00:14:37.640    "num_base_bdevs_discovered": 4,
00:14:37.640    "num_base_bdevs_operational": 4,
00:14:37.640    "process": {
00:14:37.640      "type": "rebuild",
00:14:37.640      "target": "spare",
00:14:37.640      "progress": {
00:14:37.640        "blocks": 12288,
00:14:37.640        "percent": 19
00:14:37.640      }
00:14:37.640    },
00:14:37.640    "base_bdevs_list": [
00:14:37.640      {
00:14:37.640        "name": "spare",
00:14:37.640        "uuid": "2011920c-e1c1-5211-858e-ca9284518c90",
00:14:37.640        "is_configured": true,
00:14:37.640        "data_offset": 2048,
00:14:37.640        "data_size": 63488
00:14:37.640      },
00:14:37.640      {
00:14:37.640        "name": "BaseBdev2",
00:14:37.640        "uuid": "4cf31e10-0f84-5ac3-b7d8-36f8a6e3c08f",
00:14:37.640        "is_configured": true,
00:14:37.640        "data_offset": 2048,
00:14:37.640        "data_size": 63488
00:14:37.640      },
00:14:37.640      {
00:14:37.640        "name": "BaseBdev3",
00:14:37.640        "uuid": "1e65e47c-bfc4-5458-a68e-2015d7435a39",
00:14:37.640        "is_configured": true,
00:14:37.640        "data_offset": 2048,
00:14:37.640        "data_size": 63488
00:14:37.640      },
00:14:37.640      {
00:14:37.640        "name": "BaseBdev4",
00:14:37.640        "uuid": "d1b478f9-e7d3-5026-90a4-42435991cb82",
00:14:37.640        "is_configured": true,
00:14:37.640        "data_offset": 2048,
00:14:37.640        "data_size": 63488
00:14:37.640      }
00:14:37.640    ]
00:14:37.640  }'
00:14:37.640    11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:37.640  [2024-12-16 11:36:03.678845] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:14:37.640  [2024-12-16 11:36:03.679392] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:14:37.640   11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:37.640    11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:37.899   11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:37.899   11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:14:37.899   11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:14:37.899  /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:14:37.899   11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4
00:14:37.899   11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:14:37.899   11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']'
00:14:37.899   11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:14:37.899   11:36:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:37.899   11:36:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:37.899  [2024-12-16 11:36:03.730665] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:14:37.899  [2024-12-16 11:36:03.896747] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:14:38.158  [2024-12-16 11:36:04.098747] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080
00:14:38.158  [2024-12-16 11:36:04.098796] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220
00:14:38.158   11:36:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:38.158   11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]=
00:14:38.158   11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- ))
00:14:38.158   11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:38.158   11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:38.158   11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:38.158   11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:38.158   11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:38.158    11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:38.158    11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:38.158    11:36:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:38.158    11:36:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:38.158    11:36:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:38.158   11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:38.158    "name": "raid_bdev1",
00:14:38.158    "uuid": "08ed2e30-f4e6-4817-aede-b96a9d0a80e9",
00:14:38.158    "strip_size_kb": 0,
00:14:38.158    "state": "online",
00:14:38.158    "raid_level": "raid1",
00:14:38.158    "superblock": true,
00:14:38.158    "num_base_bdevs": 4,
00:14:38.158    "num_base_bdevs_discovered": 3,
00:14:38.158    "num_base_bdevs_operational": 3,
00:14:38.158    "process": {
00:14:38.158      "type": "rebuild",
00:14:38.158      "target": "spare",
00:14:38.158      "progress": {
00:14:38.158        "blocks": 16384,
00:14:38.158        "percent": 25
00:14:38.158      }
00:14:38.158    },
00:14:38.158    "base_bdevs_list": [
00:14:38.158      {
00:14:38.158        "name": "spare",
00:14:38.158        "uuid": "2011920c-e1c1-5211-858e-ca9284518c90",
00:14:38.158        "is_configured": true,
00:14:38.158        "data_offset": 2048,
00:14:38.158        "data_size": 63488
00:14:38.158      },
00:14:38.158      {
00:14:38.158        "name": null,
00:14:38.158        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:38.158        "is_configured": false,
00:14:38.158        "data_offset": 0,
00:14:38.158        "data_size": 63488
00:14:38.158      },
00:14:38.158      {
00:14:38.159        "name": "BaseBdev3",
00:14:38.159        "uuid": "1e65e47c-bfc4-5458-a68e-2015d7435a39",
00:14:38.159        "is_configured": true,
00:14:38.159        "data_offset": 2048,
00:14:38.159        "data_size": 63488
00:14:38.159      },
00:14:38.159      {
00:14:38.159        "name": "BaseBdev4",
00:14:38.159        "uuid": "d1b478f9-e7d3-5026-90a4-42435991cb82",
00:14:38.159        "is_configured": true,
00:14:38.159        "data_offset": 2048,
00:14:38.159        "data_size": 63488
00:14:38.159      }
00:14:38.159    ]
00:14:38.159  }'
00:14:38.159    11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:38.159   11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:38.159    11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:38.505        131.25 IOPS,   393.75 MiB/s
[2024-12-16T11:36:04.573Z]  11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:38.506   11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=417
00:14:38.506   11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:14:38.506   11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:38.506   11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:38.506   11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:38.506   11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:38.506   11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:38.506    11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:38.506    11:36:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:38.506    11:36:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:38.506    11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:38.506    11:36:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:38.506   11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:38.506    "name": "raid_bdev1",
00:14:38.506    "uuid": "08ed2e30-f4e6-4817-aede-b96a9d0a80e9",
00:14:38.506    "strip_size_kb": 0,
00:14:38.506    "state": "online",
00:14:38.506    "raid_level": "raid1",
00:14:38.506    "superblock": true,
00:14:38.506    "num_base_bdevs": 4,
00:14:38.506    "num_base_bdevs_discovered": 3,
00:14:38.506    "num_base_bdevs_operational": 3,
00:14:38.506    "process": {
00:14:38.506      "type": "rebuild",
00:14:38.506      "target": "spare",
00:14:38.506      "progress": {
00:14:38.506        "blocks": 18432,
00:14:38.506        "percent": 29
00:14:38.506      }
00:14:38.506    },
00:14:38.506    "base_bdevs_list": [
00:14:38.506      {
00:14:38.506        "name": "spare",
00:14:38.506        "uuid": "2011920c-e1c1-5211-858e-ca9284518c90",
00:14:38.506        "is_configured": true,
00:14:38.506        "data_offset": 2048,
00:14:38.506        "data_size": 63488
00:14:38.506      },
00:14:38.506      {
00:14:38.506        "name": null,
00:14:38.506        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:38.506        "is_configured": false,
00:14:38.506        "data_offset": 0,
00:14:38.506        "data_size": 63488
00:14:38.506      },
00:14:38.506      {
00:14:38.506        "name": "BaseBdev3",
00:14:38.506        "uuid": "1e65e47c-bfc4-5458-a68e-2015d7435a39",
00:14:38.506        "is_configured": true,
00:14:38.506        "data_offset": 2048,
00:14:38.506        "data_size": 63488
00:14:38.506      },
00:14:38.506      {
00:14:38.506        "name": "BaseBdev4",
00:14:38.506        "uuid": "d1b478f9-e7d3-5026-90a4-42435991cb82",
00:14:38.506        "is_configured": true,
00:14:38.506        "data_offset": 2048,
00:14:38.506        "data_size": 63488
00:14:38.506      }
00:14:38.506    ]
00:14:38.506  }'
00:14:38.506    11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:38.506   11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:38.506    11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:38.506   11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:38.506   11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:14:38.506  [2024-12-16 11:36:04.475585] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:14:39.074  [2024-12-16 11:36:04.831880] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720
00:14:39.340        114.60 IOPS,   343.80 MiB/s
[2024-12-16T11:36:05.407Z] [2024-12-16 11:36:05.288955] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864
00:14:39.340   11:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:14:39.340   11:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:39.340   11:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:39.340   11:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:39.340   11:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:39.340   11:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:39.340    11:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:39.340    11:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:39.340    11:36:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:39.340    11:36:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:39.603  [2024-12-16 11:36:05.419819] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864
00:14:39.603    11:36:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:39.603   11:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:39.603    "name": "raid_bdev1",
00:14:39.603    "uuid": "08ed2e30-f4e6-4817-aede-b96a9d0a80e9",
00:14:39.603    "strip_size_kb": 0,
00:14:39.603    "state": "online",
00:14:39.603    "raid_level": "raid1",
00:14:39.603    "superblock": true,
00:14:39.603    "num_base_bdevs": 4,
00:14:39.603    "num_base_bdevs_discovered": 3,
00:14:39.603    "num_base_bdevs_operational": 3,
00:14:39.603    "process": {
00:14:39.603      "type": "rebuild",
00:14:39.603      "target": "spare",
00:14:39.603      "progress": {
00:14:39.603        "blocks": 32768,
00:14:39.603        "percent": 51
00:14:39.603      }
00:14:39.603    },
00:14:39.603    "base_bdevs_list": [
00:14:39.603      {
00:14:39.603        "name": "spare",
00:14:39.603        "uuid": "2011920c-e1c1-5211-858e-ca9284518c90",
00:14:39.603        "is_configured": true,
00:14:39.603        "data_offset": 2048,
00:14:39.603        "data_size": 63488
00:14:39.603      },
00:14:39.603      {
00:14:39.603        "name": null,
00:14:39.603        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:39.603        "is_configured": false,
00:14:39.603        "data_offset": 0,
00:14:39.603        "data_size": 63488
00:14:39.603      },
00:14:39.603      {
00:14:39.603        "name": "BaseBdev3",
00:14:39.603        "uuid": "1e65e47c-bfc4-5458-a68e-2015d7435a39",
00:14:39.603        "is_configured": true,
00:14:39.603        "data_offset": 2048,
00:14:39.603        "data_size": 63488
00:14:39.603      },
00:14:39.603      {
00:14:39.603        "name": "BaseBdev4",
00:14:39.603        "uuid": "d1b478f9-e7d3-5026-90a4-42435991cb82",
00:14:39.603        "is_configured": true,
00:14:39.603        "data_offset": 2048,
00:14:39.603        "data_size": 63488
00:14:39.603      }
00:14:39.603    ]
00:14:39.603  }'
00:14:39.603    11:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:39.603   11:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:39.603    11:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:39.603   11:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:39.603   11:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:14:40.431        105.50 IOPS,   316.50 MiB/s
[2024-12-16T11:36:06.498Z] [2024-12-16 11:36:06.410320] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296
00:14:40.691   11:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:14:40.691   11:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:40.691   11:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:40.691   11:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:40.691   11:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:40.691   11:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:40.691    11:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:40.691    11:36:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:40.691    11:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:40.691    11:36:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:40.691    11:36:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:40.691   11:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:40.691    "name": "raid_bdev1",
00:14:40.691    "uuid": "08ed2e30-f4e6-4817-aede-b96a9d0a80e9",
00:14:40.691    "strip_size_kb": 0,
00:14:40.691    "state": "online",
00:14:40.691    "raid_level": "raid1",
00:14:40.691    "superblock": true,
00:14:40.691    "num_base_bdevs": 4,
00:14:40.691    "num_base_bdevs_discovered": 3,
00:14:40.691    "num_base_bdevs_operational": 3,
00:14:40.691    "process": {
00:14:40.691      "type": "rebuild",
00:14:40.691      "target": "spare",
00:14:40.691      "progress": {
00:14:40.691        "blocks": 53248,
00:14:40.691        "percent": 83
00:14:40.691      }
00:14:40.691    },
00:14:40.691    "base_bdevs_list": [
00:14:40.691      {
00:14:40.691        "name": "spare",
00:14:40.691        "uuid": "2011920c-e1c1-5211-858e-ca9284518c90",
00:14:40.691        "is_configured": true,
00:14:40.691        "data_offset": 2048,
00:14:40.691        "data_size": 63488
00:14:40.691      },
00:14:40.691      {
00:14:40.691        "name": null,
00:14:40.691        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:40.691        "is_configured": false,
00:14:40.691        "data_offset": 0,
00:14:40.691        "data_size": 63488
00:14:40.691      },
00:14:40.691      {
00:14:40.691        "name": "BaseBdev3",
00:14:40.691        "uuid": "1e65e47c-bfc4-5458-a68e-2015d7435a39",
00:14:40.691        "is_configured": true,
00:14:40.691        "data_offset": 2048,
00:14:40.691        "data_size": 63488
00:14:40.691      },
00:14:40.691      {
00:14:40.691        "name": "BaseBdev4",
00:14:40.691        "uuid": "d1b478f9-e7d3-5026-90a4-42435991cb82",
00:14:40.691        "is_configured": true,
00:14:40.691        "data_offset": 2048,
00:14:40.691        "data_size": 63488
00:14:40.691      }
00:14:40.691    ]
00:14:40.691  }'
00:14:40.691    11:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:40.691   11:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:40.691    11:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:40.691   11:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:40.691   11:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:14:40.691  [2024-12-16 11:36:06.734358] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440
00:14:41.261  [2024-12-16 11:36:07.066047] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:14:41.261  [2024-12-16 11:36:07.165826] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:14:41.261  [2024-12-16 11:36:07.168269] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:41.830         94.71 IOPS,   284.14 MiB/s
[2024-12-16T11:36:07.897Z]  11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:14:41.830   11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:41.830   11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:41.830   11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:41.830   11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:41.830   11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:41.830    11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:41.830    11:36:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:41.830    11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:41.830    11:36:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:41.830    11:36:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:41.830   11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:41.830    "name": "raid_bdev1",
00:14:41.830    "uuid": "08ed2e30-f4e6-4817-aede-b96a9d0a80e9",
00:14:41.830    "strip_size_kb": 0,
00:14:41.830    "state": "online",
00:14:41.830    "raid_level": "raid1",
00:14:41.830    "superblock": true,
00:14:41.830    "num_base_bdevs": 4,
00:14:41.830    "num_base_bdevs_discovered": 3,
00:14:41.830    "num_base_bdevs_operational": 3,
00:14:41.830    "base_bdevs_list": [
00:14:41.830      {
00:14:41.830        "name": "spare",
00:14:41.830        "uuid": "2011920c-e1c1-5211-858e-ca9284518c90",
00:14:41.830        "is_configured": true,
00:14:41.830        "data_offset": 2048,
00:14:41.830        "data_size": 63488
00:14:41.830      },
00:14:41.830      {
00:14:41.830        "name": null,
00:14:41.830        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:41.830        "is_configured": false,
00:14:41.830        "data_offset": 0,
00:14:41.830        "data_size": 63488
00:14:41.830      },
00:14:41.830      {
00:14:41.830        "name": "BaseBdev3",
00:14:41.830        "uuid": "1e65e47c-bfc4-5458-a68e-2015d7435a39",
00:14:41.830        "is_configured": true,
00:14:41.830        "data_offset": 2048,
00:14:41.830        "data_size": 63488
00:14:41.830      },
00:14:41.830      {
00:14:41.830        "name": "BaseBdev4",
00:14:41.830        "uuid": "d1b478f9-e7d3-5026-90a4-42435991cb82",
00:14:41.831        "is_configured": true,
00:14:41.831        "data_offset": 2048,
00:14:41.831        "data_size": 63488
00:14:41.831      }
00:14:41.831    ]
00:14:41.831  }'
00:14:41.831    11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:41.831   11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:14:41.831    11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:41.831   11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:14:41.831   11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break
00:14:41.831   11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:41.831   11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:41.831   11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:41.831   11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:41.831   11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:41.831    11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:41.831    11:36:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:41.831    11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:41.831    11:36:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:41.831    11:36:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:41.831   11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:41.831    "name": "raid_bdev1",
00:14:41.831    "uuid": "08ed2e30-f4e6-4817-aede-b96a9d0a80e9",
00:14:41.831    "strip_size_kb": 0,
00:14:41.831    "state": "online",
00:14:41.831    "raid_level": "raid1",
00:14:41.831    "superblock": true,
00:14:41.831    "num_base_bdevs": 4,
00:14:41.831    "num_base_bdevs_discovered": 3,
00:14:41.831    "num_base_bdevs_operational": 3,
00:14:41.831    "base_bdevs_list": [
00:14:41.831      {
00:14:41.831        "name": "spare",
00:14:41.831        "uuid": "2011920c-e1c1-5211-858e-ca9284518c90",
00:14:41.831        "is_configured": true,
00:14:41.831        "data_offset": 2048,
00:14:41.831        "data_size": 63488
00:14:41.831      },
00:14:41.831      {
00:14:41.831        "name": null,
00:14:41.831        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:41.831        "is_configured": false,
00:14:41.831        "data_offset": 0,
00:14:41.831        "data_size": 63488
00:14:41.831      },
00:14:41.831      {
00:14:41.831        "name": "BaseBdev3",
00:14:41.831        "uuid": "1e65e47c-bfc4-5458-a68e-2015d7435a39",
00:14:41.831        "is_configured": true,
00:14:41.831        "data_offset": 2048,
00:14:41.831        "data_size": 63488
00:14:41.831      },
00:14:41.831      {
00:14:41.831        "name": "BaseBdev4",
00:14:41.831        "uuid": "d1b478f9-e7d3-5026-90a4-42435991cb82",
00:14:41.831        "is_configured": true,
00:14:41.831        "data_offset": 2048,
00:14:41.831        "data_size": 63488
00:14:41.831      }
00:14:41.831    ]
00:14:41.831  }'
00:14:41.831    11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:42.090   11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:14:42.090    11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:42.090   11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:14:42.090   11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:14:42.090   11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:42.090   11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:42.090   11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:42.090   11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:42.090   11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:42.090   11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:42.090   11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:42.090   11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:42.090   11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:42.090    11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:42.090    11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:42.090    11:36:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:42.090    11:36:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:42.090    11:36:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:42.090   11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:42.090    "name": "raid_bdev1",
00:14:42.090    "uuid": "08ed2e30-f4e6-4817-aede-b96a9d0a80e9",
00:14:42.090    "strip_size_kb": 0,
00:14:42.090    "state": "online",
00:14:42.090    "raid_level": "raid1",
00:14:42.090    "superblock": true,
00:14:42.090    "num_base_bdevs": 4,
00:14:42.090    "num_base_bdevs_discovered": 3,
00:14:42.090    "num_base_bdevs_operational": 3,
00:14:42.090    "base_bdevs_list": [
00:14:42.090      {
00:14:42.090        "name": "spare",
00:14:42.090        "uuid": "2011920c-e1c1-5211-858e-ca9284518c90",
00:14:42.090        "is_configured": true,
00:14:42.090        "data_offset": 2048,
00:14:42.090        "data_size": 63488
00:14:42.090      },
00:14:42.090      {
00:14:42.090        "name": null,
00:14:42.090        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:42.090        "is_configured": false,
00:14:42.090        "data_offset": 0,
00:14:42.090        "data_size": 63488
00:14:42.090      },
00:14:42.090      {
00:14:42.090        "name": "BaseBdev3",
00:14:42.090        "uuid": "1e65e47c-bfc4-5458-a68e-2015d7435a39",
00:14:42.090        "is_configured": true,
00:14:42.090        "data_offset": 2048,
00:14:42.090        "data_size": 63488
00:14:42.090      },
00:14:42.090      {
00:14:42.090        "name": "BaseBdev4",
00:14:42.090        "uuid": "d1b478f9-e7d3-5026-90a4-42435991cb82",
00:14:42.090        "is_configured": true,
00:14:42.090        "data_offset": 2048,
00:14:42.090        "data_size": 63488
00:14:42.090      }
00:14:42.090    ]
00:14:42.090  }'
00:14:42.090   11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:42.090   11:36:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:42.349         87.12 IOPS,   261.38 MiB/s
[2024-12-16T11:36:08.416Z]  11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:14:42.349   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:42.349   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:42.349  [2024-12-16 11:36:08.364991] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:42.349  [2024-12-16 11:36:08.365030] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:42.609  
00:14:42.609                                                                                                  Latency(us)
00:14:42.609  
[2024-12-16T11:36:08.676Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:42.609  Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:14:42.609  	 raid_bdev1          :       8.24      85.52     256.55       0.00     0.00   16331.64     298.70  118136.51
00:14:42.609  
[2024-12-16T11:36:08.676Z]  ===================================================================================================================
00:14:42.609  
[2024-12-16T11:36:08.676Z]  Total                       :                 85.52     256.55       0.00     0.00   16331.64     298.70  118136.51
00:14:42.609  [2024-12-16 11:36:08.464089] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:42.609  [2024-12-16 11:36:08.464140] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:42.609  [2024-12-16 11:36:08.464239] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:42.609  [2024-12-16 11:36:08.464254] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:14:42.609  {
00:14:42.609    "results": [
00:14:42.609      {
00:14:42.609        "job": "raid_bdev1",
00:14:42.609        "core_mask": "0x1",
00:14:42.609        "workload": "randrw",
00:14:42.609        "percentage": 50,
00:14:42.609        "status": "finished",
00:14:42.609        "queue_depth": 2,
00:14:42.609        "io_size": 3145728,
00:14:42.609        "runtime": 8.243968,
00:14:42.609        "iops": 85.5170713908642,
00:14:42.609        "mibps": 256.5512141725926,
00:14:42.609        "io_failed": 0,
00:14:42.609        "io_timeout": 0,
00:14:42.609        "avg_latency_us": 16331.64340549413,
00:14:42.609        "min_latency_us": 298.70393013100437,
00:14:42.609        "max_latency_us": 118136.51004366812
00:14:42.609      }
00:14:42.609    ],
00:14:42.609    "core_count": 1
00:14:42.609  }
00:14:42.609   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:42.609    11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:42.609    11:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:42.609    11:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:42.609    11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length
00:14:42.609    11:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:42.609   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:14:42.609   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:14:42.609   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']'
00:14:42.609   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0
00:14:42.609   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:14:42.609   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare')
00:14:42.609   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:14:42.609   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:14:42.609   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:14:42.609   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i
00:14:42.609   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:14:42.609   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:14:42.609   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0
00:14:42.892  /dev/nbd0
00:14:42.892    11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:14:42.892   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:14:42.892   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:14:42.892   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i
00:14:42.892   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:14:42.892   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:14:42.892   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:14:42.892   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break
00:14:42.892   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:14:42.892   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:14:42.892   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:14:42.892  1+0 records in
00:14:42.892  1+0 records out
00:14:42.892  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428765 s, 9.6 MB/s
00:14:42.892    11:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:42.892   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096
00:14:42.892   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:42.892   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:14:42.892   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0
00:14:42.892   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:14:42.892   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:14:42.892   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}"
00:14:42.892   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']'
00:14:42.892   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue
00:14:42.892   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}"
00:14:42.892   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']'
00:14:42.892   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1
00:14:42.892   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:14:42.892   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3')
00:14:42.893   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:14:42.893   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
00:14:42.893   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:14:42.893   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i
00:14:42.893   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:14:42.893   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:14:42.893   11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1
00:14:43.152  /dev/nbd1
00:14:43.152    11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:14:43.152   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:14:43.152   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:14:43.152   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i
00:14:43.152   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:14:43.152   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:14:43.152   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:14:43.152   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break
00:14:43.152   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:14:43.152   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:14:43.152   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:14:43.152  1+0 records in
00:14:43.152  1+0 records out
00:14:43.152  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035775 s, 11.4 MB/s
00:14:43.152    11:36:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:43.152   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096
00:14:43.152   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:43.152   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:14:43.152   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0
00:14:43.152   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:14:43.152   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:14:43.152   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:14:43.152   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1
00:14:43.152   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:14:43.152   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:14:43.152   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:14:43.152   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i
00:14:43.152   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:14:43.152   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:14:43.412    11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:14:43.412   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:14:43.412   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:14:43.412   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:14:43.412   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:14:43.412   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:14:43.412   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break
00:14:43.412   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0
00:14:43.412   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}"
00:14:43.412   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']'
00:14:43.412   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1
00:14:43.412   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:14:43.412   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4')
00:14:43.412   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:14:43.412   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
00:14:43.412   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:14:43.412   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i
00:14:43.412   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:14:43.412   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:14:43.412   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1
00:14:43.672  /dev/nbd1
00:14:43.672    11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:14:43.672   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:14:43.672   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:14:43.672   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i
00:14:43.672   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:14:43.672   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:14:43.672   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:14:43.672   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break
00:14:43.672   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:14:43.672   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:14:43.672   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:14:43.672  1+0 records in
00:14:43.672  1+0 records out
00:14:43.672  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399976 s, 10.2 MB/s
00:14:43.672    11:36:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:43.672   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096
00:14:43.672   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:43.672   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:14:43.672   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0
00:14:43.672   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:14:43.672   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:14:43.672   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:14:43.672   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1
00:14:43.672   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:14:43.672   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:14:43.672   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:14:43.672   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i
00:14:43.672   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:14:43.672   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:14:43.932    11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:14:43.932   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:14:43.932   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:14:43.932   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:14:43.932   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:14:43.932   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:14:43.932   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break
00:14:43.932   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0
00:14:43.932   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:14:43.932   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:14:43.932   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:14:43.932   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:14:43.932   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i
00:14:43.932   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:14:43.932   11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:14:44.192    11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:14:44.192   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:14:44.192   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:14:44.192   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:14:44.192   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:14:44.192   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:14:44.192   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break
00:14:44.192   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0
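[editor note] At this point the data check is complete: the rebuilt member ("spare") has been compared over NBD against BaseBdev3 and then BaseBdev4, and all NBD devices have been stopped again. A condensed sketch of that verification pass, using only the commands visible above (the 1 MiB cmp offset matches the superblock data_offset of 2048 blocks x 512 bytes):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock
    $rpc -s "$sock" nbd_start_disk spare /dev/nbd0        # export the rebuilt member
    $rpc -s "$sock" nbd_start_disk BaseBdev3 /dev/nbd1    # export a surviving base bdev
    cmp -i 1048576 /dev/nbd0 /dev/nbd1                    # skip the superblock region, compare data
    $rpc -s "$sock" nbd_stop_disk /dev/nbd1
    $rpc -s "$sock" nbd_stop_disk /dev/nbd0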
00:14:44.192   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:14:44.192   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:14:44.192   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:44.192   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:44.192   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:44.192   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:14:44.192   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:44.192   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:44.192  [2024-12-16 11:36:10.129362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:14:44.192  [2024-12-16 11:36:10.129419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:44.192  [2024-12-16 11:36:10.129442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
00:14:44.192  [2024-12-16 11:36:10.129452] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:44.192  [2024-12-16 11:36:10.131557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:44.192  [2024-12-16 11:36:10.131589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:14:44.192  [2024-12-16 11:36:10.131672] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:14:44.192  [2024-12-16 11:36:10.131708] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:44.192  [2024-12-16 11:36:10.131822] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:14:44.192  [2024-12-16 11:36:10.131938] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:14:44.192  spare
00:14:44.192   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:44.192   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:14:44.192   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:44.192   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:44.192  [2024-12-16 11:36:10.231837] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600
00:14:44.192  [2024-12-16 11:36:10.231872] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:14:44.192  [2024-12-16 11:36:10.232206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000036fc0
00:14:44.192  [2024-12-16 11:36:10.232384] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600
00:14:44.192  [2024-12-16 11:36:10.232395] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600
00:14:44.192  [2024-12-16 11:36:10.232575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:44.192   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:44.192   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:14:44.192   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:44.192   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:44.192   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:44.192   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:44.192   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:44.192   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:44.192   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:44.192   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:44.192   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:44.192    11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:44.192    11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:44.192    11:36:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:44.192    11:36:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:44.452    11:36:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:44.452   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:44.452    "name": "raid_bdev1",
00:14:44.452    "uuid": "08ed2e30-f4e6-4817-aede-b96a9d0a80e9",
00:14:44.452    "strip_size_kb": 0,
00:14:44.452    "state": "online",
00:14:44.452    "raid_level": "raid1",
00:14:44.452    "superblock": true,
00:14:44.452    "num_base_bdevs": 4,
00:14:44.452    "num_base_bdevs_discovered": 3,
00:14:44.452    "num_base_bdevs_operational": 3,
00:14:44.452    "base_bdevs_list": [
00:14:44.452      {
00:14:44.452        "name": "spare",
00:14:44.452        "uuid": "2011920c-e1c1-5211-858e-ca9284518c90",
00:14:44.452        "is_configured": true,
00:14:44.452        "data_offset": 2048,
00:14:44.452        "data_size": 63488
00:14:44.452      },
00:14:44.452      {
00:14:44.452        "name": null,
00:14:44.452        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:44.452        "is_configured": false,
00:14:44.452        "data_offset": 2048,
00:14:44.452        "data_size": 63488
00:14:44.452      },
00:14:44.452      {
00:14:44.452        "name": "BaseBdev3",
00:14:44.452        "uuid": "1e65e47c-bfc4-5458-a68e-2015d7435a39",
00:14:44.452        "is_configured": true,
00:14:44.452        "data_offset": 2048,
00:14:44.452        "data_size": 63488
00:14:44.452      },
00:14:44.452      {
00:14:44.452        "name": "BaseBdev4",
00:14:44.452        "uuid": "d1b478f9-e7d3-5026-90a4-42435991cb82",
00:14:44.452        "is_configured": true,
00:14:44.452        "data_offset": 2048,
00:14:44.452        "data_size": 63488
00:14:44.452      }
00:14:44.452    ]
00:14:44.452  }'
00:14:44.452   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:44.452   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
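[editor note] The verify_raid_bdev_state call above boils the RPC dump down to a handful of assertions. A roughly equivalent standalone sketch against the same RPC socket, with the expected values taken from the dump just shown:

    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
           bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.state' <<< "$info") == online ]]                    # array stayed online
    [[ $(jq -r '.raid_level' <<< "$info") == raid1 ]]
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") == 3 ]]     # spare + BaseBdev3 + BaseBdev4
    [[ $(jq -r '.num_base_bdevs_operational' <<< "$info") == 3 ]]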
00:14:44.712   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:44.712   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:44.712   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:44.712   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:44.712   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:44.712    11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:44.712    11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:44.712    11:36:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:44.712    11:36:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:44.712    11:36:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:44.712   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:44.712    "name": "raid_bdev1",
00:14:44.712    "uuid": "08ed2e30-f4e6-4817-aede-b96a9d0a80e9",
00:14:44.712    "strip_size_kb": 0,
00:14:44.712    "state": "online",
00:14:44.712    "raid_level": "raid1",
00:14:44.712    "superblock": true,
00:14:44.712    "num_base_bdevs": 4,
00:14:44.712    "num_base_bdevs_discovered": 3,
00:14:44.712    "num_base_bdevs_operational": 3,
00:14:44.712    "base_bdevs_list": [
00:14:44.712      {
00:14:44.712        "name": "spare",
00:14:44.712        "uuid": "2011920c-e1c1-5211-858e-ca9284518c90",
00:14:44.712        "is_configured": true,
00:14:44.712        "data_offset": 2048,
00:14:44.712        "data_size": 63488
00:14:44.712      },
00:14:44.712      {
00:14:44.712        "name": null,
00:14:44.712        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:44.712        "is_configured": false,
00:14:44.712        "data_offset": 2048,
00:14:44.712        "data_size": 63488
00:14:44.712      },
00:14:44.712      {
00:14:44.712        "name": "BaseBdev3",
00:14:44.712        "uuid": "1e65e47c-bfc4-5458-a68e-2015d7435a39",
00:14:44.712        "is_configured": true,
00:14:44.712        "data_offset": 2048,
00:14:44.712        "data_size": 63488
00:14:44.712      },
00:14:44.712      {
00:14:44.712        "name": "BaseBdev4",
00:14:44.712        "uuid": "d1b478f9-e7d3-5026-90a4-42435991cb82",
00:14:44.712        "is_configured": true,
00:14:44.712        "data_offset": 2048,
00:14:44.712        "data_size": 63488
00:14:44.712      }
00:14:44.712    ]
00:14:44.712  }'
00:14:44.712    11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:44.972   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:14:44.972    11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:44.972   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:14:44.972    11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:44.972    11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:14:44.972    11:36:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:44.972    11:36:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:44.972    11:36:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:44.972   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:14:44.972   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:14:44.972   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:44.972   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:44.972  [2024-12-16 11:36:10.872211] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:44.972   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:44.972   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:14:44.972   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:44.972   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:44.972   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:44.972   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:44.972   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:44.972   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:44.972   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:44.972   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:44.972   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:44.972    11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:44.972    11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:44.972    11:36:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:44.972    11:36:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:44.972    11:36:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:44.972   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:44.972    "name": "raid_bdev1",
00:14:44.972    "uuid": "08ed2e30-f4e6-4817-aede-b96a9d0a80e9",
00:14:44.972    "strip_size_kb": 0,
00:14:44.972    "state": "online",
00:14:44.972    "raid_level": "raid1",
00:14:44.972    "superblock": true,
00:14:44.972    "num_base_bdevs": 4,
00:14:44.972    "num_base_bdevs_discovered": 2,
00:14:44.972    "num_base_bdevs_operational": 2,
00:14:44.972    "base_bdevs_list": [
00:14:44.972      {
00:14:44.972        "name": null,
00:14:44.972        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:44.972        "is_configured": false,
00:14:44.972        "data_offset": 0,
00:14:44.972        "data_size": 63488
00:14:44.972      },
00:14:44.972      {
00:14:44.972        "name": null,
00:14:44.972        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:44.972        "is_configured": false,
00:14:44.972        "data_offset": 2048,
00:14:44.972        "data_size": 63488
00:14:44.972      },
00:14:44.972      {
00:14:44.972        "name": "BaseBdev3",
00:14:44.972        "uuid": "1e65e47c-bfc4-5458-a68e-2015d7435a39",
00:14:44.972        "is_configured": true,
00:14:44.972        "data_offset": 2048,
00:14:44.972        "data_size": 63488
00:14:44.972      },
00:14:44.972      {
00:14:44.972        "name": "BaseBdev4",
00:14:44.972        "uuid": "d1b478f9-e7d3-5026-90a4-42435991cb82",
00:14:44.972        "is_configured": true,
00:14:44.972        "data_offset": 2048,
00:14:44.972        "data_size": 63488
00:14:44.972      }
00:14:44.972    ]
00:14:44.972  }'
00:14:44.972   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:44.972   11:36:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:45.542   11:36:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:14:45.542   11:36:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:45.542   11:36:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:45.542  [2024-12-16 11:36:11.347470] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:45.542  [2024-12-16 11:36:11.347690] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6)
00:14:45.542  [2024-12-16 11:36:11.347714] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:14:45.542  [2024-12-16 11:36:11.347756] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:45.542  [2024-12-16 11:36:11.351429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037090
00:14:45.542   11:36:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:45.542   11:36:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1
00:14:45.542  [2024-12-16 11:36:11.353451] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
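[editor note] After spare is re-added, the rebuild runs in the background and the test simply sleeps one second before re-checking. A hypothetical polling loop in the same spirit, reading the process fields that appear in the next dump:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock
    while :; do
        info=$($rpc -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]] || break
        echo "rebuild target=$(jq -r '.process.target' <<< "$info"), $(jq -r '.process.progress.percent' <<< "$info")% done"
        sleep 1
    done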
00:14:46.481   11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:46.481   11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:46.481   11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:46.481   11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:46.481   11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:46.481    11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:46.481    11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:46.481    11:36:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:46.481    11:36:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:46.481    11:36:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:46.481   11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:46.481    "name": "raid_bdev1",
00:14:46.481    "uuid": "08ed2e30-f4e6-4817-aede-b96a9d0a80e9",
00:14:46.481    "strip_size_kb": 0,
00:14:46.481    "state": "online",
00:14:46.481    "raid_level": "raid1",
00:14:46.481    "superblock": true,
00:14:46.481    "num_base_bdevs": 4,
00:14:46.481    "num_base_bdevs_discovered": 3,
00:14:46.481    "num_base_bdevs_operational": 3,
00:14:46.481    "process": {
00:14:46.481      "type": "rebuild",
00:14:46.481      "target": "spare",
00:14:46.481      "progress": {
00:14:46.481        "blocks": 20480,
00:14:46.481        "percent": 32
00:14:46.481      }
00:14:46.481    },
00:14:46.481    "base_bdevs_list": [
00:14:46.481      {
00:14:46.481        "name": "spare",
00:14:46.481        "uuid": "2011920c-e1c1-5211-858e-ca9284518c90",
00:14:46.481        "is_configured": true,
00:14:46.481        "data_offset": 2048,
00:14:46.481        "data_size": 63488
00:14:46.481      },
00:14:46.481      {
00:14:46.481        "name": null,
00:14:46.481        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:46.481        "is_configured": false,
00:14:46.481        "data_offset": 2048,
00:14:46.481        "data_size": 63488
00:14:46.481      },
00:14:46.481      {
00:14:46.481        "name": "BaseBdev3",
00:14:46.481        "uuid": "1e65e47c-bfc4-5458-a68e-2015d7435a39",
00:14:46.481        "is_configured": true,
00:14:46.481        "data_offset": 2048,
00:14:46.481        "data_size": 63488
00:14:46.481      },
00:14:46.481      {
00:14:46.481        "name": "BaseBdev4",
00:14:46.481        "uuid": "d1b478f9-e7d3-5026-90a4-42435991cb82",
00:14:46.481        "is_configured": true,
00:14:46.481        "data_offset": 2048,
00:14:46.481        "data_size": 63488
00:14:46.481      }
00:14:46.481    ]
00:14:46.481  }'
00:14:46.481    11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:46.481   11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:46.481    11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:46.481   11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:46.481   11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:14:46.481   11:36:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:46.481   11:36:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:46.481  [2024-12-16 11:36:12.498273] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:46.741  [2024-12-16 11:36:12.558041] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:14:46.741  [2024-12-16 11:36:12.558106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:46.741  [2024-12-16 11:36:12.558121] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:46.741  [2024-12-16 11:36:12.558130] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:14:46.741   11:36:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:46.741   11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:14:46.741   11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:46.741   11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:46.741   11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:46.741   11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:46.741   11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:46.741   11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:46.741   11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:46.741   11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:46.741   11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:46.741    11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:46.741    11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:46.741    11:36:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:46.741    11:36:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:46.741    11:36:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:46.741   11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:46.741    "name": "raid_bdev1",
00:14:46.741    "uuid": "08ed2e30-f4e6-4817-aede-b96a9d0a80e9",
00:14:46.741    "strip_size_kb": 0,
00:14:46.741    "state": "online",
00:14:46.741    "raid_level": "raid1",
00:14:46.741    "superblock": true,
00:14:46.741    "num_base_bdevs": 4,
00:14:46.741    "num_base_bdevs_discovered": 2,
00:14:46.741    "num_base_bdevs_operational": 2,
00:14:46.741    "base_bdevs_list": [
00:14:46.741      {
00:14:46.741        "name": null,
00:14:46.741        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:46.741        "is_configured": false,
00:14:46.741        "data_offset": 0,
00:14:46.741        "data_size": 63488
00:14:46.741      },
00:14:46.741      {
00:14:46.741        "name": null,
00:14:46.741        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:46.741        "is_configured": false,
00:14:46.741        "data_offset": 2048,
00:14:46.741        "data_size": 63488
00:14:46.741      },
00:14:46.741      {
00:14:46.741        "name": "BaseBdev3",
00:14:46.741        "uuid": "1e65e47c-bfc4-5458-a68e-2015d7435a39",
00:14:46.741        "is_configured": true,
00:14:46.741        "data_offset": 2048,
00:14:46.741        "data_size": 63488
00:14:46.741      },
00:14:46.741      {
00:14:46.741        "name": "BaseBdev4",
00:14:46.741        "uuid": "d1b478f9-e7d3-5026-90a4-42435991cb82",
00:14:46.741        "is_configured": true,
00:14:46.741        "data_offset": 2048,
00:14:46.741        "data_size": 63488
00:14:46.741      }
00:14:46.741    ]
00:14:46.741  }'
00:14:46.741   11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:46.741   11:36:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:47.001   11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:14:47.001   11:36:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:47.001   11:36:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:47.001  [2024-12-16 11:36:12.993531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:14:47.001  [2024-12-16 11:36:12.993608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:47.001  [2024-12-16 11:36:12.993635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680
00:14:47.001  [2024-12-16 11:36:12.993646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:47.001  [2024-12-16 11:36:12.994087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:47.001  [2024-12-16 11:36:12.994113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:14:47.001  [2024-12-16 11:36:12.994198] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:14:47.001  [2024-12-16 11:36:12.994217] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6)
00:14:47.001  [2024-12-16 11:36:12.994236] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:14:47.001  [2024-12-16 11:36:12.994261] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:47.001  spare
00:14:47.001  [2024-12-16 11:36:12.997861] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160
00:14:47.001   11:36:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:47.001   11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1
00:14:47.001  [2024-12-16 11:36:12.999749] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:47.940   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:47.940   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:47.940   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:47.940   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:47.940   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:48.200    11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:48.200    11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:48.200    11:36:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:48.200    11:36:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:48.200    11:36:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:48.200   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:48.200    "name": "raid_bdev1",
00:14:48.200    "uuid": "08ed2e30-f4e6-4817-aede-b96a9d0a80e9",
00:14:48.200    "strip_size_kb": 0,
00:14:48.200    "state": "online",
00:14:48.200    "raid_level": "raid1",
00:14:48.200    "superblock": true,
00:14:48.200    "num_base_bdevs": 4,
00:14:48.200    "num_base_bdevs_discovered": 3,
00:14:48.200    "num_base_bdevs_operational": 3,
00:14:48.200    "process": {
00:14:48.200      "type": "rebuild",
00:14:48.200      "target": "spare",
00:14:48.200      "progress": {
00:14:48.200        "blocks": 20480,
00:14:48.200        "percent": 32
00:14:48.200      }
00:14:48.200    },
00:14:48.200    "base_bdevs_list": [
00:14:48.200      {
00:14:48.200        "name": "spare",
00:14:48.200        "uuid": "2011920c-e1c1-5211-858e-ca9284518c90",
00:14:48.200        "is_configured": true,
00:14:48.200        "data_offset": 2048,
00:14:48.200        "data_size": 63488
00:14:48.200      },
00:14:48.200      {
00:14:48.200        "name": null,
00:14:48.200        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:48.200        "is_configured": false,
00:14:48.200        "data_offset": 2048,
00:14:48.200        "data_size": 63488
00:14:48.200      },
00:14:48.200      {
00:14:48.200        "name": "BaseBdev3",
00:14:48.200        "uuid": "1e65e47c-bfc4-5458-a68e-2015d7435a39",
00:14:48.200        "is_configured": true,
00:14:48.200        "data_offset": 2048,
00:14:48.200        "data_size": 63488
00:14:48.200      },
00:14:48.200      {
00:14:48.200        "name": "BaseBdev4",
00:14:48.200        "uuid": "d1b478f9-e7d3-5026-90a4-42435991cb82",
00:14:48.200        "is_configured": true,
00:14:48.200        "data_offset": 2048,
00:14:48.200        "data_size": 63488
00:14:48.200      }
00:14:48.200    ]
00:14:48.200  }'
00:14:48.200    11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:48.200   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:48.200    11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:48.200   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:48.200   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:14:48.200   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:48.200   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:48.200  [2024-12-16 11:36:14.160927] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:48.200  [2024-12-16 11:36:14.204260] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:14:48.200  [2024-12-16 11:36:14.204332] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:48.200  [2024-12-16 11:36:14.204351] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:48.200  [2024-12-16 11:36:14.204358] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:14:48.200   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:48.200   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:14:48.200   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:48.200   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:48.200   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:48.200   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:48.200   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:48.200   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:48.200   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:48.200   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:48.200   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:48.200    11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:48.200    11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:48.200    11:36:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:48.200    11:36:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:48.200    11:36:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:48.460   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:48.460    "name": "raid_bdev1",
00:14:48.460    "uuid": "08ed2e30-f4e6-4817-aede-b96a9d0a80e9",
00:14:48.460    "strip_size_kb": 0,
00:14:48.460    "state": "online",
00:14:48.460    "raid_level": "raid1",
00:14:48.460    "superblock": true,
00:14:48.460    "num_base_bdevs": 4,
00:14:48.460    "num_base_bdevs_discovered": 2,
00:14:48.460    "num_base_bdevs_operational": 2,
00:14:48.460    "base_bdevs_list": [
00:14:48.460      {
00:14:48.460        "name": null,
00:14:48.460        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:48.460        "is_configured": false,
00:14:48.460        "data_offset": 0,
00:14:48.460        "data_size": 63488
00:14:48.460      },
00:14:48.460      {
00:14:48.460        "name": null,
00:14:48.460        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:48.460        "is_configured": false,
00:14:48.460        "data_offset": 2048,
00:14:48.460        "data_size": 63488
00:14:48.460      },
00:14:48.460      {
00:14:48.460        "name": "BaseBdev3",
00:14:48.460        "uuid": "1e65e47c-bfc4-5458-a68e-2015d7435a39",
00:14:48.460        "is_configured": true,
00:14:48.460        "data_offset": 2048,
00:14:48.460        "data_size": 63488
00:14:48.460      },
00:14:48.460      {
00:14:48.460        "name": "BaseBdev4",
00:14:48.460        "uuid": "d1b478f9-e7d3-5026-90a4-42435991cb82",
00:14:48.460        "is_configured": true,
00:14:48.460        "data_offset": 2048,
00:14:48.460        "data_size": 63488
00:14:48.460      }
00:14:48.460    ]
00:14:48.460  }'
00:14:48.460   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:48.460   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:48.721   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:48.721   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:48.721   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:48.721   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:48.721   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:48.721    11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:48.721    11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:48.721    11:36:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:48.721    11:36:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:48.721    11:36:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:48.721   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:48.721    "name": "raid_bdev1",
00:14:48.721    "uuid": "08ed2e30-f4e6-4817-aede-b96a9d0a80e9",
00:14:48.721    "strip_size_kb": 0,
00:14:48.721    "state": "online",
00:14:48.721    "raid_level": "raid1",
00:14:48.721    "superblock": true,
00:14:48.721    "num_base_bdevs": 4,
00:14:48.721    "num_base_bdevs_discovered": 2,
00:14:48.721    "num_base_bdevs_operational": 2,
00:14:48.721    "base_bdevs_list": [
00:14:48.721      {
00:14:48.721        "name": null,
00:14:48.721        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:48.721        "is_configured": false,
00:14:48.721        "data_offset": 0,
00:14:48.721        "data_size": 63488
00:14:48.721      },
00:14:48.721      {
00:14:48.721        "name": null,
00:14:48.721        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:48.721        "is_configured": false,
00:14:48.721        "data_offset": 2048,
00:14:48.721        "data_size": 63488
00:14:48.721      },
00:14:48.721      {
00:14:48.721        "name": "BaseBdev3",
00:14:48.721        "uuid": "1e65e47c-bfc4-5458-a68e-2015d7435a39",
00:14:48.721        "is_configured": true,
00:14:48.721        "data_offset": 2048,
00:14:48.721        "data_size": 63488
00:14:48.721      },
00:14:48.721      {
00:14:48.721        "name": "BaseBdev4",
00:14:48.721        "uuid": "d1b478f9-e7d3-5026-90a4-42435991cb82",
00:14:48.721        "is_configured": true,
00:14:48.721        "data_offset": 2048,
00:14:48.721        "data_size": 63488
00:14:48.721      }
00:14:48.721    ]
00:14:48.721  }'
00:14:48.721    11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:48.721   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:14:48.721    11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:48.981   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:14:48.981   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:14:48.981   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:48.981   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:48.981   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:48.981   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:14:48.981   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:48.981   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:48.981  [2024-12-16 11:36:14.819569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:14:48.981  [2024-12-16 11:36:14.819616] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:48.981  [2024-12-16 11:36:14.819637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80
00:14:48.981  [2024-12-16 11:36:14.819647] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:48.981  [2024-12-16 11:36:14.820054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:48.981  [2024-12-16 11:36:14.820070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:14:48.981  [2024-12-16 11:36:14.820141] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:14:48.981  [2024-12-16 11:36:14.820154] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6)
00:14:48.981  [2024-12-16 11:36:14.820164] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:14:48.981  [2024-12-16 11:36:14.820176] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:14:48.981  BaseBdev1
00:14:48.981   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:48.981   11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1
00:14:49.918   11:36:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:14:49.918   11:36:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:49.918   11:36:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:49.918   11:36:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:49.918   11:36:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:49.918   11:36:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:49.918   11:36:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:49.918   11:36:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:49.918   11:36:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:49.918   11:36:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:49.918    11:36:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:49.918    11:36:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:49.918    11:36:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:49.918    11:36:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:49.918    11:36:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:49.918   11:36:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:49.918    "name": "raid_bdev1",
00:14:49.918    "uuid": "08ed2e30-f4e6-4817-aede-b96a9d0a80e9",
00:14:49.918    "strip_size_kb": 0,
00:14:49.918    "state": "online",
00:14:49.918    "raid_level": "raid1",
00:14:49.919    "superblock": true,
00:14:49.919    "num_base_bdevs": 4,
00:14:49.919    "num_base_bdevs_discovered": 2,
00:14:49.919    "num_base_bdevs_operational": 2,
00:14:49.919    "base_bdevs_list": [
00:14:49.919      {
00:14:49.919        "name": null,
00:14:49.919        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:49.919        "is_configured": false,
00:14:49.919        "data_offset": 0,
00:14:49.919        "data_size": 63488
00:14:49.919      },
00:14:49.919      {
00:14:49.919        "name": null,
00:14:49.919        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:49.919        "is_configured": false,
00:14:49.919        "data_offset": 2048,
00:14:49.919        "data_size": 63488
00:14:49.919      },
00:14:49.919      {
00:14:49.919        "name": "BaseBdev3",
00:14:49.919        "uuid": "1e65e47c-bfc4-5458-a68e-2015d7435a39",
00:14:49.919        "is_configured": true,
00:14:49.919        "data_offset": 2048,
00:14:49.919        "data_size": 63488
00:14:49.919      },
00:14:49.919      {
00:14:49.919        "name": "BaseBdev4",
00:14:49.919        "uuid": "d1b478f9-e7d3-5026-90a4-42435991cb82",
00:14:49.919        "is_configured": true,
00:14:49.919        "data_offset": 2048,
00:14:49.919        "data_size": 63488
00:14:49.919      }
00:14:49.919    ]
00:14:49.919  }'
00:14:49.919   11:36:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:49.919   11:36:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:50.489   11:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:50.489   11:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:50.489   11:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:50.489   11:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:50.489   11:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:50.489    11:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:50.489    11:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:50.489    11:36:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:50.489    11:36:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:50.489    11:36:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:50.489   11:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:50.489    "name": "raid_bdev1",
00:14:50.489    "uuid": "08ed2e30-f4e6-4817-aede-b96a9d0a80e9",
00:14:50.489    "strip_size_kb": 0,
00:14:50.489    "state": "online",
00:14:50.489    "raid_level": "raid1",
00:14:50.489    "superblock": true,
00:14:50.489    "num_base_bdevs": 4,
00:14:50.489    "num_base_bdevs_discovered": 2,
00:14:50.489    "num_base_bdevs_operational": 2,
00:14:50.489    "base_bdevs_list": [
00:14:50.489      {
00:14:50.489        "name": null,
00:14:50.489        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:50.489        "is_configured": false,
00:14:50.489        "data_offset": 0,
00:14:50.489        "data_size": 63488
00:14:50.489      },
00:14:50.489      {
00:14:50.489        "name": null,
00:14:50.489        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:50.489        "is_configured": false,
00:14:50.489        "data_offset": 2048,
00:14:50.489        "data_size": 63488
00:14:50.489      },
00:14:50.489      {
00:14:50.489        "name": "BaseBdev3",
00:14:50.489        "uuid": "1e65e47c-bfc4-5458-a68e-2015d7435a39",
00:14:50.489        "is_configured": true,
00:14:50.489        "data_offset": 2048,
00:14:50.489        "data_size": 63488
00:14:50.489      },
00:14:50.489      {
00:14:50.489        "name": "BaseBdev4",
00:14:50.489        "uuid": "d1b478f9-e7d3-5026-90a4-42435991cb82",
00:14:50.489        "is_configured": true,
00:14:50.489        "data_offset": 2048,
00:14:50.489        "data_size": 63488
00:14:50.489      }
00:14:50.489    ]
00:14:50.489  }'
00:14:50.489    11:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:50.489   11:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:14:50.489    11:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:50.489   11:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:14:50.489   11:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:14:50.489   11:36:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0
00:14:50.489   11:36:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:14:50.489   11:36:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:14:50.489   11:36:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:50.489    11:36:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:14:50.489   11:36:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:50.489   11:36:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:14:50.489   11:36:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:50.489   11:36:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:50.489  [2024-12-16 11:36:16.488994] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:50.489  [2024-12-16 11:36:16.489158] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6)
00:14:50.489  [2024-12-16 11:36:16.489175] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:14:50.489  request:
00:14:50.489  {
00:14:50.489  "base_bdev": "BaseBdev1",
00:14:50.489  "raid_bdev": "raid_bdev1",
00:14:50.489  "method": "bdev_raid_add_base_bdev",
00:14:50.489  "req_id": 1
00:14:50.489  }
00:14:50.489  Got JSON-RPC error response
00:14:50.489  response:
00:14:50.489  {
00:14:50.489  "code": -22,
00:14:50.489  "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:14:50.489  }
00:14:50.489   11:36:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:14:50.489   11:36:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1
00:14:50.489   11:36:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:14:50.489   11:36:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:14:50.489   11:36:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:14:50.489   11:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1
00:14:51.872   11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:14:51.872   11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:51.872   11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:51.872   11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:51.872   11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:51.872   11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:51.872   11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:51.872   11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:51.872   11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:51.872   11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:51.872    11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:51.872    11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:51.872    11:36:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:51.872    11:36:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:51.872    11:36:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:51.872   11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:51.872    "name": "raid_bdev1",
00:14:51.872    "uuid": "08ed2e30-f4e6-4817-aede-b96a9d0a80e9",
00:14:51.872    "strip_size_kb": 0,
00:14:51.872    "state": "online",
00:14:51.872    "raid_level": "raid1",
00:14:51.872    "superblock": true,
00:14:51.872    "num_base_bdevs": 4,
00:14:51.872    "num_base_bdevs_discovered": 2,
00:14:51.872    "num_base_bdevs_operational": 2,
00:14:51.872    "base_bdevs_list": [
00:14:51.872      {
00:14:51.872        "name": null,
00:14:51.873        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:51.873        "is_configured": false,
00:14:51.873        "data_offset": 0,
00:14:51.873        "data_size": 63488
00:14:51.873      },
00:14:51.873      {
00:14:51.873        "name": null,
00:14:51.873        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:51.873        "is_configured": false,
00:14:51.873        "data_offset": 2048,
00:14:51.873        "data_size": 63488
00:14:51.873      },
00:14:51.873      {
00:14:51.873        "name": "BaseBdev3",
00:14:51.873        "uuid": "1e65e47c-bfc4-5458-a68e-2015d7435a39",
00:14:51.873        "is_configured": true,
00:14:51.873        "data_offset": 2048,
00:14:51.873        "data_size": 63488
00:14:51.873      },
00:14:51.873      {
00:14:51.873        "name": "BaseBdev4",
00:14:51.873        "uuid": "d1b478f9-e7d3-5026-90a4-42435991cb82",
00:14:51.873        "is_configured": true,
00:14:51.873        "data_offset": 2048,
00:14:51.873        "data_size": 63488
00:14:51.873      }
00:14:51.873    ]
00:14:51.873  }'
00:14:51.873   11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:51.873   11:36:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:52.133   11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:52.133   11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:52.133   11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:52.133   11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:52.133   11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:52.133    11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:52.133    11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:52.133    11:36:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:52.133    11:36:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:52.133    11:36:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:52.133   11:36:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:52.133    "name": "raid_bdev1",
00:14:52.133    "uuid": "08ed2e30-f4e6-4817-aede-b96a9d0a80e9",
00:14:52.133    "strip_size_kb": 0,
00:14:52.133    "state": "online",
00:14:52.133    "raid_level": "raid1",
00:14:52.133    "superblock": true,
00:14:52.133    "num_base_bdevs": 4,
00:14:52.133    "num_base_bdevs_discovered": 2,
00:14:52.133    "num_base_bdevs_operational": 2,
00:14:52.133    "base_bdevs_list": [
00:14:52.133      {
00:14:52.133        "name": null,
00:14:52.133        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:52.133        "is_configured": false,
00:14:52.133        "data_offset": 0,
00:14:52.133        "data_size": 63488
00:14:52.133      },
00:14:52.133      {
00:14:52.133        "name": null,
00:14:52.133        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:52.133        "is_configured": false,
00:14:52.133        "data_offset": 2048,
00:14:52.133        "data_size": 63488
00:14:52.133      },
00:14:52.133      {
00:14:52.133        "name": "BaseBdev3",
00:14:52.133        "uuid": "1e65e47c-bfc4-5458-a68e-2015d7435a39",
00:14:52.133        "is_configured": true,
00:14:52.133        "data_offset": 2048,
00:14:52.133        "data_size": 63488
00:14:52.133      },
00:14:52.133      {
00:14:52.133        "name": "BaseBdev4",
00:14:52.133        "uuid": "d1b478f9-e7d3-5026-90a4-42435991cb82",
00:14:52.133        "is_configured": true,
00:14:52.133        "data_offset": 2048,
00:14:52.133        "data_size": 63488
00:14:52.133      }
00:14:52.133    ]
00:14:52.133  }'
00:14:52.133    11:36:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:52.133   11:36:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:14:52.133    11:36:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:52.133   11:36:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:14:52.133   11:36:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 90114
00:14:52.133   11:36:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 90114 ']'
00:14:52.133   11:36:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 90114
00:14:52.133    11:36:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname
00:14:52.133   11:36:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:14:52.133    11:36:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90114
00:14:52.133   11:36:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:14:52.133   11:36:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:14:52.133  killing process with pid 90114
00:14:52.133   11:36:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90114'
00:14:52.133   11:36:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 90114
00:14:52.133  Received shutdown signal, test time was about 17.965126 seconds
00:14:52.133  
00:14:52.133                                                                                                  Latency(us)
00:14:52.133  
[2024-12-16T11:36:18.200Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:52.133  
[2024-12-16T11:36:18.200Z]  ===================================================================================================================
00:14:52.133  
[2024-12-16T11:36:18.200Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:14:52.133  [2024-12-16 11:36:18.163191] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:52.133  [2024-12-16 11:36:18.163334] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:52.133   11:36:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 90114
00:14:52.133  [2024-12-16 11:36:18.163406] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:52.133  [2024-12-16 11:36:18.163421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline
00:14:52.393  [2024-12-16 11:36:18.207948] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:52.393   11:36:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0
00:14:52.393  
00:14:52.393  real	0m19.969s
00:14:52.393  user	0m26.658s
00:14:52.393  sys	0m2.580s
00:14:52.393   11:36:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:14:52.393   11:36:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:14:52.393  ************************************
00:14:52.393  END TEST raid_rebuild_test_sb_io
00:14:52.393  ************************************
00:14:52.653   11:36:18 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4}
00:14:52.653   11:36:18 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false
00:14:52.653   11:36:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:14:52.653   11:36:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:14:52.653   11:36:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:14:52.653  ************************************
00:14:52.653  START TEST raid5f_state_function_test
00:14:52.653  ************************************
00:14:52.653   11:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false
00:14:52.653   11:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f
00:14:52.653   11:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:14:52.653   11:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:14:52.653   11:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:14:52.653    11:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:14:52.653    11:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:14:52.653    11:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:14:52.653    11:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:14:52.653    11:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:14:52.653    11:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:14:52.653    11:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:14:52.653    11:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:14:52.653    11:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:14:52.653    11:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:14:52.653    11:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:14:52.653   11:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:14:52.653   11:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:14:52.653   11:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:14:52.653   11:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:14:52.653   11:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:14:52.653   11:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:14:52.653   11:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']'
00:14:52.653   11:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:14:52.653   11:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:14:52.653   11:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:14:52.653   11:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:14:52.653   11:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=90819
00:14:52.653   11:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:14:52.653  Process raid pid: 90819
00:14:52.653   11:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90819'
00:14:52.653   11:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 90819
00:14:52.653   11:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 90819 ']'
00:14:52.653   11:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:52.653   11:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:14:52.653  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:52.653   11:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:52.653   11:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:14:52.653   11:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:52.653  [2024-12-16 11:36:18.607548] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:14:52.653  [2024-12-16 11:36:18.607686] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:52.913  [2024-12-16 11:36:18.769647] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:52.913  [2024-12-16 11:36:18.814333] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:14:52.913  [2024-12-16 11:36:18.855903] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:52.913  [2024-12-16 11:36:18.855959] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:53.482   11:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:14:53.482   11:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:14:53.482   11:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:14:53.482   11:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:53.482   11:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:53.482  [2024-12-16 11:36:19.464462] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:53.482  [2024-12-16 11:36:19.464511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:53.482  [2024-12-16 11:36:19.464525] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:53.482  [2024-12-16 11:36:19.464545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:53.482  [2024-12-16 11:36:19.464552] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:14:53.482  [2024-12-16 11:36:19.464564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:14:53.482   11:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:53.482   11:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:14:53.482   11:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:53.482   11:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:53.482   11:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:53.482   11:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:53.482   11:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:53.482   11:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:53.482   11:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:53.482   11:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:53.482   11:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:53.483    11:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:53.483    11:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:53.483    11:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:53.483    11:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:53.483    11:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:53.483   11:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:53.483    "name": "Existed_Raid",
00:14:53.483    "uuid": "00000000-0000-0000-0000-000000000000",
00:14:53.483    "strip_size_kb": 64,
00:14:53.483    "state": "configuring",
00:14:53.483    "raid_level": "raid5f",
00:14:53.483    "superblock": false,
00:14:53.483    "num_base_bdevs": 3,
00:14:53.483    "num_base_bdevs_discovered": 0,
00:14:53.483    "num_base_bdevs_operational": 3,
00:14:53.483    "base_bdevs_list": [
00:14:53.483      {
00:14:53.483        "name": "BaseBdev1",
00:14:53.483        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:53.483        "is_configured": false,
00:14:53.483        "data_offset": 0,
00:14:53.483        "data_size": 0
00:14:53.483      },
00:14:53.483      {
00:14:53.483        "name": "BaseBdev2",
00:14:53.483        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:53.483        "is_configured": false,
00:14:53.483        "data_offset": 0,
00:14:53.483        "data_size": 0
00:14:53.483      },
00:14:53.483      {
00:14:53.483        "name": "BaseBdev3",
00:14:53.483        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:53.483        "is_configured": false,
00:14:53.483        "data_offset": 0,
00:14:53.483        "data_size": 0
00:14:53.483      }
00:14:53.483    ]
00:14:53.483  }'
00:14:53.483   11:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:53.483   11:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:54.052   11:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:14:54.052   11:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:54.053   11:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:54.053  [2024-12-16 11:36:19.931583] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:54.053  [2024-12-16 11:36:19.931636] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:14:54.053   11:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:54.053   11:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:14:54.053   11:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:54.053   11:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:54.053  [2024-12-16 11:36:19.943587] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:54.053  [2024-12-16 11:36:19.943625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:54.053  [2024-12-16 11:36:19.943650] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:54.053  [2024-12-16 11:36:19.943660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:54.053  [2024-12-16 11:36:19.943667] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:14:54.053  [2024-12-16 11:36:19.943677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:14:54.053   11:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:54.053   11:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:14:54.053   11:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:54.053   11:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:54.053  [2024-12-16 11:36:19.964392] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:54.053  BaseBdev1
00:14:54.053   11:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:54.053   11:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:14:54.053   11:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:14:54.053   11:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:14:54.053   11:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i
00:14:54.053   11:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:14:54.053   11:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:14:54.053   11:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:14:54.053   11:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:54.053   11:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:54.053   11:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:54.053   11:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:14:54.053   11:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:54.053   11:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:54.053  [
00:14:54.053  {
00:14:54.053  "name": "BaseBdev1",
00:14:54.053  "aliases": [
00:14:54.053  "a1e03904-99d0-4340-aca6-db138b8ebe0d"
00:14:54.053  ],
00:14:54.053  "product_name": "Malloc disk",
00:14:54.053  "block_size": 512,
00:14:54.053  "num_blocks": 65536,
00:14:54.053  "uuid": "a1e03904-99d0-4340-aca6-db138b8ebe0d",
00:14:54.053  "assigned_rate_limits": {
00:14:54.053  "rw_ios_per_sec": 0,
00:14:54.053  "rw_mbytes_per_sec": 0,
00:14:54.053  "r_mbytes_per_sec": 0,
00:14:54.053  "w_mbytes_per_sec": 0
00:14:54.053  },
00:14:54.053  "claimed": true,
00:14:54.053  "claim_type": "exclusive_write",
00:14:54.053  "zoned": false,
00:14:54.053  "supported_io_types": {
00:14:54.053  "read": true,
00:14:54.053  "write": true,
00:14:54.053  "unmap": true,
00:14:54.053  "flush": true,
00:14:54.053  "reset": true,
00:14:54.053  "nvme_admin": false,
00:14:54.053  "nvme_io": false,
00:14:54.053  "nvme_io_md": false,
00:14:54.053  "write_zeroes": true,
00:14:54.053  "zcopy": true,
00:14:54.053  "get_zone_info": false,
00:14:54.053  "zone_management": false,
00:14:54.053  "zone_append": false,
00:14:54.053  "compare": false,
00:14:54.053  "compare_and_write": false,
00:14:54.053  "abort": true,
00:14:54.053  "seek_hole": false,
00:14:54.053  "seek_data": false,
00:14:54.053  "copy": true,
00:14:54.053  "nvme_iov_md": false
00:14:54.053  },
00:14:54.053  "memory_domains": [
00:14:54.053  {
00:14:54.053  "dma_device_id": "system",
00:14:54.053  "dma_device_type": 1
00:14:54.053  },
00:14:54.053  {
00:14:54.053  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:54.053  "dma_device_type": 2
00:14:54.053  }
00:14:54.053  ],
00:14:54.053  "driver_specific": {}
00:14:54.053  }
00:14:54.053  ]
00:14:54.053   11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:54.053   11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:14:54.053   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:14:54.053   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:54.053   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:54.053   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:54.053   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:54.053   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:54.053   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:54.053   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:54.053   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:54.053   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:54.053    11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:54.053    11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:54.053    11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:54.053    11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:54.053    11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:54.053   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:54.053    "name": "Existed_Raid",
00:14:54.053    "uuid": "00000000-0000-0000-0000-000000000000",
00:14:54.053    "strip_size_kb": 64,
00:14:54.053    "state": "configuring",
00:14:54.053    "raid_level": "raid5f",
00:14:54.053    "superblock": false,
00:14:54.053    "num_base_bdevs": 3,
00:14:54.053    "num_base_bdevs_discovered": 1,
00:14:54.053    "num_base_bdevs_operational": 3,
00:14:54.053    "base_bdevs_list": [
00:14:54.053      {
00:14:54.053        "name": "BaseBdev1",
00:14:54.053        "uuid": "a1e03904-99d0-4340-aca6-db138b8ebe0d",
00:14:54.053        "is_configured": true,
00:14:54.053        "data_offset": 0,
00:14:54.053        "data_size": 65536
00:14:54.053      },
00:14:54.053      {
00:14:54.053        "name": "BaseBdev2",
00:14:54.053        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:54.053        "is_configured": false,
00:14:54.053        "data_offset": 0,
00:14:54.053        "data_size": 0
00:14:54.053      },
00:14:54.053      {
00:14:54.053        "name": "BaseBdev3",
00:14:54.053        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:54.053        "is_configured": false,
00:14:54.053        "data_offset": 0,
00:14:54.053        "data_size": 0
00:14:54.053      }
00:14:54.053    ]
00:14:54.053  }'
00:14:54.053   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:54.053   11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:54.623   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:14:54.623   11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:54.623   11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:54.623  [2024-12-16 11:36:20.443657] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:54.623  [2024-12-16 11:36:20.443717] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:14:54.623   11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:54.623   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:14:54.623   11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:54.623   11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:54.623  [2024-12-16 11:36:20.451684] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:54.623  [2024-12-16 11:36:20.453675] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:54.623  [2024-12-16 11:36:20.453712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:54.623  [2024-12-16 11:36:20.453721] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:14:54.623  [2024-12-16 11:36:20.453732] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:14:54.623   11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:54.623   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:14:54.623   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:14:54.623   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:14:54.623   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:54.623   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:54.623   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:54.623   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:54.623   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:54.623   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:54.623   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:54.623   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:54.623   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:54.623    11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:54.623    11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:54.623    11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:54.623    11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:54.623    11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:54.623   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:54.623    "name": "Existed_Raid",
00:14:54.623    "uuid": "00000000-0000-0000-0000-000000000000",
00:14:54.623    "strip_size_kb": 64,
00:14:54.623    "state": "configuring",
00:14:54.623    "raid_level": "raid5f",
00:14:54.623    "superblock": false,
00:14:54.623    "num_base_bdevs": 3,
00:14:54.623    "num_base_bdevs_discovered": 1,
00:14:54.623    "num_base_bdevs_operational": 3,
00:14:54.623    "base_bdevs_list": [
00:14:54.623      {
00:14:54.623        "name": "BaseBdev1",
00:14:54.623        "uuid": "a1e03904-99d0-4340-aca6-db138b8ebe0d",
00:14:54.623        "is_configured": true,
00:14:54.623        "data_offset": 0,
00:14:54.623        "data_size": 65536
00:14:54.623      },
00:14:54.623      {
00:14:54.623        "name": "BaseBdev2",
00:14:54.623        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:54.623        "is_configured": false,
00:14:54.623        "data_offset": 0,
00:14:54.623        "data_size": 0
00:14:54.623      },
00:14:54.623      {
00:14:54.623        "name": "BaseBdev3",
00:14:54.623        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:54.623        "is_configured": false,
00:14:54.623        "data_offset": 0,
00:14:54.623        "data_size": 0
00:14:54.623      }
00:14:54.623    ]
00:14:54.623  }'
00:14:54.623   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:54.623   11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:54.883   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:14:54.883   11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:54.883   11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:54.883  [2024-12-16 11:36:20.919913] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:54.883  BaseBdev2
00:14:54.883   11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:54.883   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:14:54.883   11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:14:54.883   11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:14:54.883   11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i
00:14:54.883   11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:14:54.883   11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:14:54.883   11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:14:54.883   11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:54.883   11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:54.883   11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:54.883   11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:14:54.883   11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:54.883   11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:54.883  [
00:14:54.883  {
00:14:54.883  "name": "BaseBdev2",
00:14:54.883  "aliases": [
00:14:54.883  "1157ee32-f0f6-4db8-9ee4-29cb3fccfbf2"
00:14:54.883  ],
00:14:54.883  "product_name": "Malloc disk",
00:14:54.883  "block_size": 512,
00:14:54.883  "num_blocks": 65536,
00:14:54.883  "uuid": "1157ee32-f0f6-4db8-9ee4-29cb3fccfbf2",
00:14:54.883  "assigned_rate_limits": {
00:14:54.883  "rw_ios_per_sec": 0,
00:14:54.883  "rw_mbytes_per_sec": 0,
00:14:54.883  "r_mbytes_per_sec": 0,
00:14:54.883  "w_mbytes_per_sec": 0
00:14:54.883  },
00:14:54.883  "claimed": true,
00:14:54.883  "claim_type": "exclusive_write",
00:14:54.883  "zoned": false,
00:14:55.142  "supported_io_types": {
00:14:55.142  "read": true,
00:14:55.142  "write": true,
00:14:55.142  "unmap": true,
00:14:55.142  "flush": true,
00:14:55.142  "reset": true,
00:14:55.142  "nvme_admin": false,
00:14:55.142  "nvme_io": false,
00:14:55.142  "nvme_io_md": false,
00:14:55.142  "write_zeroes": true,
00:14:55.142  "zcopy": true,
00:14:55.142  "get_zone_info": false,
00:14:55.142  "zone_management": false,
00:14:55.142  "zone_append": false,
00:14:55.142  "compare": false,
00:14:55.142  "compare_and_write": false,
00:14:55.142  "abort": true,
00:14:55.142  "seek_hole": false,
00:14:55.142  "seek_data": false,
00:14:55.142  "copy": true,
00:14:55.142  "nvme_iov_md": false
00:14:55.142  },
00:14:55.142  "memory_domains": [
00:14:55.142  {
00:14:55.142  "dma_device_id": "system",
00:14:55.142  "dma_device_type": 1
00:14:55.142  },
00:14:55.142  {
00:14:55.142  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:55.142  "dma_device_type": 2
00:14:55.142  }
00:14:55.142  ],
00:14:55.142  "driver_specific": {}
00:14:55.142  }
00:14:55.142  ]
00:14:55.142   11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:55.142   11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:14:55.142   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:14:55.142   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:14:55.142   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:14:55.142   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:55.143   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:55.143   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:55.143   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:55.143   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:55.143   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:55.143   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:55.143   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:55.143   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:55.143    11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:55.143    11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:55.143    11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:55.143    11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:55.143    11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:55.143   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:55.143    "name": "Existed_Raid",
00:14:55.143    "uuid": "00000000-0000-0000-0000-000000000000",
00:14:55.143    "strip_size_kb": 64,
00:14:55.143    "state": "configuring",
00:14:55.143    "raid_level": "raid5f",
00:14:55.143    "superblock": false,
00:14:55.143    "num_base_bdevs": 3,
00:14:55.143    "num_base_bdevs_discovered": 2,
00:14:55.143    "num_base_bdevs_operational": 3,
00:14:55.143    "base_bdevs_list": [
00:14:55.143      {
00:14:55.143        "name": "BaseBdev1",
00:14:55.143        "uuid": "a1e03904-99d0-4340-aca6-db138b8ebe0d",
00:14:55.143        "is_configured": true,
00:14:55.143        "data_offset": 0,
00:14:55.143        "data_size": 65536
00:14:55.143      },
00:14:55.143      {
00:14:55.143        "name": "BaseBdev2",
00:14:55.143        "uuid": "1157ee32-f0f6-4db8-9ee4-29cb3fccfbf2",
00:14:55.143        "is_configured": true,
00:14:55.143        "data_offset": 0,
00:14:55.143        "data_size": 65536
00:14:55.143      },
00:14:55.143      {
00:14:55.143        "name": "BaseBdev3",
00:14:55.143        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:55.143        "is_configured": false,
00:14:55.143        "data_offset": 0,
00:14:55.143        "data_size": 0
00:14:55.143      }
00:14:55.143    ]
00:14:55.143  }'
00:14:55.143   11:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:55.143   11:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:55.403   11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:14:55.403   11:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:55.403   11:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:55.403  [2024-12-16 11:36:21.398216] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:14:55.403  [2024-12-16 11:36:21.398279] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:14:55.403  [2024-12-16 11:36:21.398293] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:14:55.403  [2024-12-16 11:36:21.398616] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:14:55.403  [2024-12-16 11:36:21.399066] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:14:55.403  [2024-12-16 11:36:21.399084] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:14:55.403  [2024-12-16 11:36:21.399330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:55.403  BaseBdev3
00:14:55.403   11:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:55.403   11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:14:55.403   11:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:14:55.403   11:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:14:55.403   11:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i
00:14:55.403   11:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:14:55.403   11:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:14:55.403   11:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:14:55.403   11:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:55.403   11:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:55.403   11:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:55.403   11:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:14:55.403   11:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:55.403   11:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:55.403  [
00:14:55.403  {
00:14:55.403  "name": "BaseBdev3",
00:14:55.403  "aliases": [
00:14:55.403  "e7342126-179d-411c-bde8-a577950f625f"
00:14:55.403  ],
00:14:55.403  "product_name": "Malloc disk",
00:14:55.403  "block_size": 512,
00:14:55.403  "num_blocks": 65536,
00:14:55.403  "uuid": "e7342126-179d-411c-bde8-a577950f625f",
00:14:55.403  "assigned_rate_limits": {
00:14:55.403  "rw_ios_per_sec": 0,
00:14:55.403  "rw_mbytes_per_sec": 0,
00:14:55.403  "r_mbytes_per_sec": 0,
00:14:55.403  "w_mbytes_per_sec": 0
00:14:55.403  },
00:14:55.403  "claimed": true,
00:14:55.403  "claim_type": "exclusive_write",
00:14:55.403  "zoned": false,
00:14:55.403  "supported_io_types": {
00:14:55.403  "read": true,
00:14:55.403  "write": true,
00:14:55.403  "unmap": true,
00:14:55.403  "flush": true,
00:14:55.403  "reset": true,
00:14:55.403  "nvme_admin": false,
00:14:55.403  "nvme_io": false,
00:14:55.403  "nvme_io_md": false,
00:14:55.403  "write_zeroes": true,
00:14:55.403  "zcopy": true,
00:14:55.403  "get_zone_info": false,
00:14:55.403  "zone_management": false,
00:14:55.403  "zone_append": false,
00:14:55.403  "compare": false,
00:14:55.403  "compare_and_write": false,
00:14:55.403  "abort": true,
00:14:55.403  "seek_hole": false,
00:14:55.403  "seek_data": false,
00:14:55.403  "copy": true,
00:14:55.403  "nvme_iov_md": false
00:14:55.403  },
00:14:55.403  "memory_domains": [
00:14:55.403  {
00:14:55.403  "dma_device_id": "system",
00:14:55.403  "dma_device_type": 1
00:14:55.403  },
00:14:55.403  {
00:14:55.403  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:55.403  "dma_device_type": 2
00:14:55.403  }
00:14:55.403  ],
00:14:55.403  "driver_specific": {}
00:14:55.403  }
00:14:55.403  ]
00:14:55.403   11:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:55.403   11:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0
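For reference, the waitforbdev step traced above reduces to a simple pattern: settle bdev examination first, then let bdev_get_bdevs itself wait up to 2000 ms for the named bdev to appear. The following is a minimal sketch, assuming rpc_cmd wraps SPDK's scripts/rpc.py as the test harness does; the real helper in autotest_common.sh may differ in detail.

    # Wait for bdev examination to finish, then confirm the bdev exists,
    # letting bdev_get_bdevs poll for up to $timeout milliseconds.
    wait_for_bdev() {
        local name=$1 timeout=${2:-2000}
        rpc_cmd bdev_wait_for_examine
        rpc_cmd bdev_get_bdevs -b "$name" -t "$timeout" > /dev/null
    }

    wait_for_bdev BaseBdev3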
00:14:55.403   11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:14:55.403   11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:14:55.403   11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:14:55.403   11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:55.403   11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:55.403   11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:55.403   11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:55.403   11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:55.403   11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:55.403   11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:55.403   11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:55.403   11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:55.403    11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:55.403    11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:55.403    11:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:55.403    11:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:55.403    11:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:55.663   11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:55.663    "name": "Existed_Raid",
00:14:55.663    "uuid": "d18a1895-2d4d-4729-86d6-47b16813fa01",
00:14:55.663    "strip_size_kb": 64,
00:14:55.663    "state": "online",
00:14:55.663    "raid_level": "raid5f",
00:14:55.663    "superblock": false,
00:14:55.663    "num_base_bdevs": 3,
00:14:55.663    "num_base_bdevs_discovered": 3,
00:14:55.663    "num_base_bdevs_operational": 3,
00:14:55.663    "base_bdevs_list": [
00:14:55.663      {
00:14:55.663        "name": "BaseBdev1",
00:14:55.663        "uuid": "a1e03904-99d0-4340-aca6-db138b8ebe0d",
00:14:55.663        "is_configured": true,
00:14:55.663        "data_offset": 0,
00:14:55.663        "data_size": 65536
00:14:55.663      },
00:14:55.663      {
00:14:55.663        "name": "BaseBdev2",
00:14:55.663        "uuid": "1157ee32-f0f6-4db8-9ee4-29cb3fccfbf2",
00:14:55.663        "is_configured": true,
00:14:55.663        "data_offset": 0,
00:14:55.663        "data_size": 65536
00:14:55.663      },
00:14:55.663      {
00:14:55.663        "name": "BaseBdev3",
00:14:55.663        "uuid": "e7342126-179d-411c-bde8-a577950f625f",
00:14:55.663        "is_configured": true,
00:14:55.663        "data_offset": 0,
00:14:55.663        "data_size": 65536
00:14:55.663      }
00:14:55.663    ]
00:14:55.663  }'
00:14:55.663   11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:55.663   11:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
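The verify_raid_bdev_state call above asserts that Existed_Raid is online, raid5f, strip size 64, with all 3 base bdevs discovered. A minimal sketch of that kind of check, built from the same bdev_raid_get_bdevs/jq pipeline the trace shows (the actual helper in bdev/bdev_raid.sh may structure it differently):

    info=$(rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    [[ $(jq -r '.state' <<< "$info") == online ]]
    [[ $(jq -r '.raid_level' <<< "$info") == raid5f ]]
    [[ $(jq -r '.strip_size_kb' <<< "$info") -eq 64 ]]
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") -eq 3 ]]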
00:14:55.922   11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:14:55.922   11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:14:55.922   11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:14:55.922   11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:14:55.922   11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:14:55.922   11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:14:55.922    11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:14:55.922    11:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:55.922    11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:14:55.922    11:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:55.922  [2024-12-16 11:36:21.921617] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:55.922    11:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:55.922   11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:14:55.922    "name": "Existed_Raid",
00:14:55.922    "aliases": [
00:14:55.922      "d18a1895-2d4d-4729-86d6-47b16813fa01"
00:14:55.922    ],
00:14:55.922    "product_name": "Raid Volume",
00:14:55.922    "block_size": 512,
00:14:55.922    "num_blocks": 131072,
00:14:55.922    "uuid": "d18a1895-2d4d-4729-86d6-47b16813fa01",
00:14:55.922    "assigned_rate_limits": {
00:14:55.922      "rw_ios_per_sec": 0,
00:14:55.922      "rw_mbytes_per_sec": 0,
00:14:55.922      "r_mbytes_per_sec": 0,
00:14:55.922      "w_mbytes_per_sec": 0
00:14:55.922    },
00:14:55.922    "claimed": false,
00:14:55.922    "zoned": false,
00:14:55.922    "supported_io_types": {
00:14:55.922      "read": true,
00:14:55.922      "write": true,
00:14:55.922      "unmap": false,
00:14:55.922      "flush": false,
00:14:55.922      "reset": true,
00:14:55.922      "nvme_admin": false,
00:14:55.922      "nvme_io": false,
00:14:55.922      "nvme_io_md": false,
00:14:55.922      "write_zeroes": true,
00:14:55.922      "zcopy": false,
00:14:55.922      "get_zone_info": false,
00:14:55.922      "zone_management": false,
00:14:55.922      "zone_append": false,
00:14:55.922      "compare": false,
00:14:55.922      "compare_and_write": false,
00:14:55.922      "abort": false,
00:14:55.922      "seek_hole": false,
00:14:55.922      "seek_data": false,
00:14:55.922      "copy": false,
00:14:55.922      "nvme_iov_md": false
00:14:55.922    },
00:14:55.922    "driver_specific": {
00:14:55.922      "raid": {
00:14:55.922        "uuid": "d18a1895-2d4d-4729-86d6-47b16813fa01",
00:14:55.922        "strip_size_kb": 64,
00:14:55.922        "state": "online",
00:14:55.922        "raid_level": "raid5f",
00:14:55.922        "superblock": false,
00:14:55.922        "num_base_bdevs": 3,
00:14:55.922        "num_base_bdevs_discovered": 3,
00:14:55.922        "num_base_bdevs_operational": 3,
00:14:55.922        "base_bdevs_list": [
00:14:55.922          {
00:14:55.922            "name": "BaseBdev1",
00:14:55.922            "uuid": "a1e03904-99d0-4340-aca6-db138b8ebe0d",
00:14:55.922            "is_configured": true,
00:14:55.922            "data_offset": 0,
00:14:55.922            "data_size": 65536
00:14:55.922          },
00:14:55.922          {
00:14:55.922            "name": "BaseBdev2",
00:14:55.922            "uuid": "1157ee32-f0f6-4db8-9ee4-29cb3fccfbf2",
00:14:55.922            "is_configured": true,
00:14:55.922            "data_offset": 0,
00:14:55.922            "data_size": 65536
00:14:55.922          },
00:14:55.922          {
00:14:55.922            "name": "BaseBdev3",
00:14:55.922            "uuid": "e7342126-179d-411c-bde8-a577950f625f",
00:14:55.922            "is_configured": true,
00:14:55.922            "data_offset": 0,
00:14:55.922            "data_size": 65536
00:14:55.922          }
00:14:55.922        ]
00:14:55.922      }
00:14:55.922    }
00:14:55.922  }'
00:14:55.922    11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:14:55.922   11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:14:55.923  BaseBdev2
00:14:55.923  BaseBdev3'
00:14:56.182    11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:56.182   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:14:56.182   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:56.182    11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:56.182    11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:14:56.182    11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:56.182    11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:56.182    11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:56.182   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:14:56.182   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:14:56.182   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:56.182    11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:14:56.182    11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:56.182    11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:56.182    11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:56.182    11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:56.182   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:14:56.182   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:14:56.182   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:56.182    11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:14:56.182    11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:56.182    11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:56.182    11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:56.182    11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:56.182   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:14:56.182   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
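The verify_raid_bdev_properties pass above compares the geometry tuple (block_size, md_size, md_interleave, dif_type) of the raid volume against every configured base bdev; malloc bdevs only report block_size, which is why the joined strings come out as '512   '. A sketch of the same comparison, under the assumption that rpc_cmd wraps scripts/rpc.py:

    fields='[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
    raid_info=$(rpc_cmd bdev_get_bdevs -b Existed_Raid | jq '.[]')
    raid_geom=$(jq -r "$fields" <<< "$raid_info")
    base_names=$(jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' <<< "$raid_info")
    for name in $base_names; do
        base_geom=$(rpc_cmd bdev_get_bdevs -b "$name" | jq -r ".[] | $fields")
        [[ $base_geom == "$raid_geom" ]]
    done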
00:14:56.182   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:14:56.182   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:56.182   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:56.182  [2024-12-16 11:36:22.165046] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:14:56.182   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:56.182   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:14:56.182   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f
00:14:56.182   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:14:56.182   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0
00:14:56.182   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:14:56.182   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2
00:14:56.182   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:56.182   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:56.182   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:56.182   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:56.182   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:56.182   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:56.182   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:56.182   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:56.182   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:56.182    11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:56.182    11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:56.182    11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:56.182    11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:56.182    11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:56.182   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:56.182    "name": "Existed_Raid",
00:14:56.182    "uuid": "d18a1895-2d4d-4729-86d6-47b16813fa01",
00:14:56.182    "strip_size_kb": 64,
00:14:56.182    "state": "online",
00:14:56.182    "raid_level": "raid5f",
00:14:56.182    "superblock": false,
00:14:56.182    "num_base_bdevs": 3,
00:14:56.182    "num_base_bdevs_discovered": 2,
00:14:56.182    "num_base_bdevs_operational": 2,
00:14:56.182    "base_bdevs_list": [
00:14:56.182      {
00:14:56.182        "name": null,
00:14:56.182        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:56.182        "is_configured": false,
00:14:56.182        "data_offset": 0,
00:14:56.182        "data_size": 65536
00:14:56.182      },
00:14:56.182      {
00:14:56.182        "name": "BaseBdev2",
00:14:56.182        "uuid": "1157ee32-f0f6-4db8-9ee4-29cb3fccfbf2",
00:14:56.182        "is_configured": true,
00:14:56.182        "data_offset": 0,
00:14:56.182        "data_size": 65536
00:14:56.182      },
00:14:56.182      {
00:14:56.182        "name": "BaseBdev3",
00:14:56.182        "uuid": "e7342126-179d-411c-bde8-a577950f625f",
00:14:56.182        "is_configured": true,
00:14:56.182        "data_offset": 0,
00:14:56.182        "data_size": 65536
00:14:56.182      }
00:14:56.182    ]
00:14:56.182  }'
00:14:56.182   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:56.182   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
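The sequence above demonstrates the behaviour the test depends on: raid5f carries redundancy, so deleting one backing malloc bdev degrades Existed_Raid but leaves it online with 2 of 3 base bdevs discovered. A quick way to reproduce that check (a sketch with the same rpc_cmd assumption as above; jq -e turns the boolean into the exit status):

    rpc_cmd bdev_malloc_delete BaseBdev1
    rpc_cmd bdev_raid_get_bdevs all \
        | jq -e '.[] | select(.name == "Existed_Raid") | .state == "online" and .num_base_bdevs_discovered == 2'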
00:14:56.751   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:14:56.751   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:14:56.751    11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:56.751    11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:14:56.751    11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:56.751    11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:56.751    11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:56.751   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:14:56.751   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:14:56.751   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:14:56.751   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:56.751   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:56.751  [2024-12-16 11:36:22.707488] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:14:56.751  [2024-12-16 11:36:22.707601] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:56.751  [2024-12-16 11:36:22.718548] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:56.751   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:56.751   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:14:56.751   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:14:56.751    11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:56.751    11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:14:56.751    11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:56.751    11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:56.751    11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:56.751   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:14:56.751   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:14:56.751   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:14:56.751   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:56.751   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:56.751  [2024-12-16 11:36:22.770488] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:14:56.751  [2024-12-16 11:36:22.770548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline
00:14:56.751   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:56.751   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:14:56.751   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:14:56.751    11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:56.751    11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:56.751    11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:56.751    11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:14:56.751    11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:57.011   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:14:57.011   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:14:57.011   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:14:57.011   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:14:57.011   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:14:57.011   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:14:57.011   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:57.011   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:57.011  BaseBdev2
00:14:57.011   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:57.011   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:14:57.011   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:14:57.011   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:14:57.011   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i
00:14:57.011   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:14:57.011   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:14:57.011   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:14:57.011   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:57.011   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:57.011   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:57.011   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:14:57.011   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:57.011   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:57.011  [
00:14:57.011    {
00:14:57.011      "name": "BaseBdev2",
00:14:57.011      "aliases": [
00:14:57.011        "c37714ae-dc0f-45eb-bfe1-1fd5628f751e"
00:14:57.011      ],
00:14:57.011      "product_name": "Malloc disk",
00:14:57.011      "block_size": 512,
00:14:57.011      "num_blocks": 65536,
00:14:57.011      "uuid": "c37714ae-dc0f-45eb-bfe1-1fd5628f751e",
00:14:57.011      "assigned_rate_limits": {
00:14:57.011        "rw_ios_per_sec": 0,
00:14:57.011        "rw_mbytes_per_sec": 0,
00:14:57.011        "r_mbytes_per_sec": 0,
00:14:57.011        "w_mbytes_per_sec": 0
00:14:57.011      },
00:14:57.011      "claimed": false,
00:14:57.011      "zoned": false,
00:14:57.011      "supported_io_types": {
00:14:57.011        "read": true,
00:14:57.011        "write": true,
00:14:57.011        "unmap": true,
00:14:57.011        "flush": true,
00:14:57.011        "reset": true,
00:14:57.011        "nvme_admin": false,
00:14:57.011        "nvme_io": false,
00:14:57.011        "nvme_io_md": false,
00:14:57.011        "write_zeroes": true,
00:14:57.011        "zcopy": true,
00:14:57.011        "get_zone_info": false,
00:14:57.011        "zone_management": false,
00:14:57.011        "zone_append": false,
00:14:57.011        "compare": false,
00:14:57.011        "compare_and_write": false,
00:14:57.011        "abort": true,
00:14:57.011        "seek_hole": false,
00:14:57.011        "seek_data": false,
00:14:57.011        "copy": true,
00:14:57.011        "nvme_iov_md": false
00:14:57.011      },
00:14:57.011      "memory_domains": [
00:14:57.011        {
00:14:57.011          "dma_device_id": "system",
00:14:57.011          "dma_device_type": 1
00:14:57.011        },
00:14:57.011        {
00:14:57.011          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:57.011          "dma_device_type": 2
00:14:57.011        }
00:14:57.011      ],
00:14:57.011      "driver_specific": {}
00:14:57.011    }
00:14:57.011  ]
00:14:57.011   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:57.011   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:14:57.011   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:57.012  BaseBdev3
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:57.012  [
00:14:57.012    {
00:14:57.012      "name": "BaseBdev3",
00:14:57.012      "aliases": [
00:14:57.012        "92de4235-7c85-48d8-ab82-57a6234e7f35"
00:14:57.012      ],
00:14:57.012      "product_name": "Malloc disk",
00:14:57.012      "block_size": 512,
00:14:57.012      "num_blocks": 65536,
00:14:57.012      "uuid": "92de4235-7c85-48d8-ab82-57a6234e7f35",
00:14:57.012      "assigned_rate_limits": {
00:14:57.012        "rw_ios_per_sec": 0,
00:14:57.012        "rw_mbytes_per_sec": 0,
00:14:57.012        "r_mbytes_per_sec": 0,
00:14:57.012        "w_mbytes_per_sec": 0
00:14:57.012      },
00:14:57.012      "claimed": false,
00:14:57.012      "zoned": false,
00:14:57.012      "supported_io_types": {
00:14:57.012        "read": true,
00:14:57.012        "write": true,
00:14:57.012        "unmap": true,
00:14:57.012        "flush": true,
00:14:57.012        "reset": true,
00:14:57.012        "nvme_admin": false,
00:14:57.012        "nvme_io": false,
00:14:57.012        "nvme_io_md": false,
00:14:57.012        "write_zeroes": true,
00:14:57.012        "zcopy": true,
00:14:57.012        "get_zone_info": false,
00:14:57.012        "zone_management": false,
00:14:57.012        "zone_append": false,
00:14:57.012        "compare": false,
00:14:57.012        "compare_and_write": false,
00:14:57.012        "abort": true,
00:14:57.012        "seek_hole": false,
00:14:57.012        "seek_data": false,
00:14:57.012        "copy": true,
00:14:57.012        "nvme_iov_md": false
00:14:57.012      },
00:14:57.012      "memory_domains": [
00:14:57.012        {
00:14:57.012          "dma_device_id": "system",
00:14:57.012          "dma_device_type": 1
00:14:57.012        },
00:14:57.012        {
00:14:57.012          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:57.012          "dma_device_type": 2
00:14:57.012        }
00:14:57.012      ],
00:14:57.012      "driver_specific": {}
00:14:57.012    }
00:14:57.012  ]
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
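At this point the test has torn the previous array down and recreates only two of the three backing devices (32 MiB each, 512-byte blocks), deliberately leaving BaseBdev1 absent for the next step. A sketch of the equivalent setup, assuming rpc_cmd wraps scripts/rpc.py:

    # Recreate only BaseBdev2 and BaseBdev3; BaseBdev1 stays missing on purpose.
    for name in BaseBdev2 BaseBdev3; do
        rpc_cmd bdev_malloc_create 32 512 -b "$name"
        rpc_cmd bdev_wait_for_examine
    done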
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:57.012  [2024-12-16 11:36:22.946577] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:57.012  [2024-12-16 11:36:22.946617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:57.012  [2024-12-16 11:36:22.946638] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:57.012  [2024-12-16 11:36:22.948604] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:57.012   11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:57.012    11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:57.012    11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:57.012    11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:57.012    11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:57.012    11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:57.012   11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:57.012    "name": "Existed_Raid",
00:14:57.012    "uuid": "00000000-0000-0000-0000-000000000000",
00:14:57.012    "strip_size_kb": 64,
00:14:57.012    "state": "configuring",
00:14:57.012    "raid_level": "raid5f",
00:14:57.012    "superblock": false,
00:14:57.012    "num_base_bdevs": 3,
00:14:57.012    "num_base_bdevs_discovered": 2,
00:14:57.012    "num_base_bdevs_operational": 3,
00:14:57.012    "base_bdevs_list": [
00:14:57.012      {
00:14:57.012        "name": "BaseBdev1",
00:14:57.012        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:57.012        "is_configured": false,
00:14:57.012        "data_offset": 0,
00:14:57.012        "data_size": 0
00:14:57.012      },
00:14:57.012      {
00:14:57.012        "name": "BaseBdev2",
00:14:57.012        "uuid": "c37714ae-dc0f-45eb-bfe1-1fd5628f751e",
00:14:57.012        "is_configured": true,
00:14:57.012        "data_offset": 0,
00:14:57.012        "data_size": 65536
00:14:57.012      },
00:14:57.012      {
00:14:57.012        "name": "BaseBdev3",
00:14:57.012        "uuid": "92de4235-7c85-48d8-ab82-57a6234e7f35",
00:14:57.012        "is_configured": true,
00:14:57.012        "data_offset": 0,
00:14:57.012        "data_size": 65536
00:14:57.012      }
00:14:57.012    ]
00:14:57.012  }'
00:14:57.012   11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:57.012   11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
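As the trace shows, bdev_raid_create accepts a base bdev name that does not exist yet: Existed_Raid is registered in the "configuring" state, only the bdevs that are present (BaseBdev2, BaseBdev3) get claimed, and the missing slot keeps the all-zero UUID placeholder. A sketch of the same sequence (rpc_cmd assumption as above):

    # BaseBdev1 is intentionally missing at this point.
    rpc_cmd bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    rpc_cmd bdev_raid_get_bdevs all \
        | jq -e '.[] | select(.name == "Existed_Raid") | .state == "configuring"'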
00:14:57.581   11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:14:57.581   11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:57.581   11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:57.581  [2024-12-16 11:36:23.381808] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:14:57.581   11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:57.581   11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:14:57.581   11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:57.581   11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:57.581   11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:57.581   11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:57.581   11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:57.581   11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:57.581   11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:57.581   11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:57.581   11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:57.581    11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:57.581    11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:57.581    11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:57.581    11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:57.581    11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:57.581   11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:57.581    "name": "Existed_Raid",
00:14:57.581    "uuid": "00000000-0000-0000-0000-000000000000",
00:14:57.581    "strip_size_kb": 64,
00:14:57.581    "state": "configuring",
00:14:57.581    "raid_level": "raid5f",
00:14:57.581    "superblock": false,
00:14:57.581    "num_base_bdevs": 3,
00:14:57.581    "num_base_bdevs_discovered": 1,
00:14:57.581    "num_base_bdevs_operational": 3,
00:14:57.581    "base_bdevs_list": [
00:14:57.581      {
00:14:57.581        "name": "BaseBdev1",
00:14:57.581        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:57.581        "is_configured": false,
00:14:57.581        "data_offset": 0,
00:14:57.581        "data_size": 0
00:14:57.581      },
00:14:57.581      {
00:14:57.581        "name": null,
00:14:57.581        "uuid": "c37714ae-dc0f-45eb-bfe1-1fd5628f751e",
00:14:57.581        "is_configured": false,
00:14:57.581        "data_offset": 0,
00:14:57.581        "data_size": 65536
00:14:57.581      },
00:14:57.581      {
00:14:57.581        "name": "BaseBdev3",
00:14:57.581        "uuid": "92de4235-7c85-48d8-ab82-57a6234e7f35",
00:14:57.581        "is_configured": true,
00:14:57.581        "data_offset": 0,
00:14:57.581        "data_size": 65536
00:14:57.581      }
00:14:57.581    ]
00:14:57.581  }'
00:14:57.581   11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:57.581   11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:57.841    11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:14:57.841    11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:57.841    11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:57.841    11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:57.841    11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:57.841   11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:14:57.841   11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:14:57.841   11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:57.841   11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:58.144  [2024-12-16 11:36:23.907706] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:58.144  BaseBdev1
00:14:58.144   11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:58.144   11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:14:58.144   11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:14:58.144   11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:14:58.144   11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i
00:14:58.144   11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:14:58.145   11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:14:58.145   11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:14:58.145   11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:58.145   11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:58.145   11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:58.145   11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:14:58.145   11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:58.145   11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:58.145  [
00:14:58.145    {
00:14:58.145      "name": "BaseBdev1",
00:14:58.145      "aliases": [
00:14:58.145        "7a6955f0-53b6-47cc-af57-9594623eda05"
00:14:58.145      ],
00:14:58.145      "product_name": "Malloc disk",
00:14:58.145      "block_size": 512,
00:14:58.145      "num_blocks": 65536,
00:14:58.145      "uuid": "7a6955f0-53b6-47cc-af57-9594623eda05",
00:14:58.145      "assigned_rate_limits": {
00:14:58.145        "rw_ios_per_sec": 0,
00:14:58.145        "rw_mbytes_per_sec": 0,
00:14:58.145        "r_mbytes_per_sec": 0,
00:14:58.145        "w_mbytes_per_sec": 0
00:14:58.145      },
00:14:58.145      "claimed": true,
00:14:58.145      "claim_type": "exclusive_write",
00:14:58.145      "zoned": false,
00:14:58.145      "supported_io_types": {
00:14:58.145        "read": true,
00:14:58.145        "write": true,
00:14:58.145        "unmap": true,
00:14:58.145        "flush": true,
00:14:58.145        "reset": true,
00:14:58.145        "nvme_admin": false,
00:14:58.145        "nvme_io": false,
00:14:58.145        "nvme_io_md": false,
00:14:58.145        "write_zeroes": true,
00:14:58.145        "zcopy": true,
00:14:58.145        "get_zone_info": false,
00:14:58.145        "zone_management": false,
00:14:58.145        "zone_append": false,
00:14:58.145        "compare": false,
00:14:58.145        "compare_and_write": false,
00:14:58.145        "abort": true,
00:14:58.145        "seek_hole": false,
00:14:58.145        "seek_data": false,
00:14:58.145        "copy": true,
00:14:58.145        "nvme_iov_md": false
00:14:58.145      },
00:14:58.145      "memory_domains": [
00:14:58.145        {
00:14:58.145          "dma_device_id": "system",
00:14:58.145          "dma_device_type": 1
00:14:58.145        },
00:14:58.145        {
00:14:58.145          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:58.145          "dma_device_type": 2
00:14:58.145        }
00:14:58.145      ],
00:14:58.145      "driver_specific": {}
00:14:58.145    }
00:14:58.145  ]
00:14:58.145   11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:58.145   11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:14:58.145   11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:14:58.145   11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:58.145   11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:58.145   11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:58.145   11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:58.145   11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:58.145   11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:58.145   11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:58.145   11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:58.145   11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:58.145    11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:58.145    11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:58.145    11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:58.145    11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:58.145    11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:58.145   11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:58.145    "name": "Existed_Raid",
00:14:58.145    "uuid": "00000000-0000-0000-0000-000000000000",
00:14:58.145    "strip_size_kb": 64,
00:14:58.145    "state": "configuring",
00:14:58.145    "raid_level": "raid5f",
00:14:58.145    "superblock": false,
00:14:58.145    "num_base_bdevs": 3,
00:14:58.145    "num_base_bdevs_discovered": 2,
00:14:58.145    "num_base_bdevs_operational": 3,
00:14:58.145    "base_bdevs_list": [
00:14:58.145      {
00:14:58.145        "name": "BaseBdev1",
00:14:58.145        "uuid": "7a6955f0-53b6-47cc-af57-9594623eda05",
00:14:58.145        "is_configured": true,
00:14:58.145        "data_offset": 0,
00:14:58.145        "data_size": 65536
00:14:58.145      },
00:14:58.145      {
00:14:58.145        "name": null,
00:14:58.145        "uuid": "c37714ae-dc0f-45eb-bfe1-1fd5628f751e",
00:14:58.145        "is_configured": false,
00:14:58.145        "data_offset": 0,
00:14:58.145        "data_size": 65536
00:14:58.145      },
00:14:58.145      {
00:14:58.145        "name": "BaseBdev3",
00:14:58.145        "uuid": "92de4235-7c85-48d8-ab82-57a6234e7f35",
00:14:58.145        "is_configured": true,
00:14:58.145        "data_offset": 0,
00:14:58.145        "data_size": 65536
00:14:58.145      }
00:14:58.145    ]
00:14:58.145  }'
00:14:58.145   11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:58.145   11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:58.405    11:36:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:58.405    11:36:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:14:58.405    11:36:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:58.405    11:36:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:58.405    11:36:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:58.405   11:36:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:14:58.405   11:36:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:14:58.405   11:36:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:58.405   11:36:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:58.405  [2024-12-16 11:36:24.466819] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:14:58.664   11:36:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:58.664   11:36:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:14:58.664   11:36:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:58.664   11:36:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:58.664   11:36:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:58.664   11:36:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:58.664   11:36:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:58.664   11:36:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:58.664   11:36:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:58.664   11:36:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:58.664   11:36:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:58.664    11:36:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:58.664    11:36:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:58.664    11:36:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:58.664    11:36:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:58.664    11:36:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:58.664   11:36:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:58.664    "name": "Existed_Raid",
00:14:58.664    "uuid": "00000000-0000-0000-0000-000000000000",
00:14:58.664    "strip_size_kb": 64,
00:14:58.664    "state": "configuring",
00:14:58.664    "raid_level": "raid5f",
00:14:58.664    "superblock": false,
00:14:58.664    "num_base_bdevs": 3,
00:14:58.664    "num_base_bdevs_discovered": 1,
00:14:58.664    "num_base_bdevs_operational": 3,
00:14:58.664    "base_bdevs_list": [
00:14:58.664      {
00:14:58.664        "name": "BaseBdev1",
00:14:58.664        "uuid": "7a6955f0-53b6-47cc-af57-9594623eda05",
00:14:58.664        "is_configured": true,
00:14:58.664        "data_offset": 0,
00:14:58.664        "data_size": 65536
00:14:58.664      },
00:14:58.664      {
00:14:58.664        "name": null,
00:14:58.664        "uuid": "c37714ae-dc0f-45eb-bfe1-1fd5628f751e",
00:14:58.664        "is_configured": false,
00:14:58.664        "data_offset": 0,
00:14:58.664        "data_size": 65536
00:14:58.664      },
00:14:58.664      {
00:14:58.664        "name": null,
00:14:58.664        "uuid": "92de4235-7c85-48d8-ab82-57a6234e7f35",
00:14:58.664        "is_configured": false,
00:14:58.664        "data_offset": 0,
00:14:58.664        "data_size": 65536
00:14:58.664      }
00:14:58.664    ]
00:14:58.664  }'
00:14:58.664   11:36:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:58.664   11:36:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:58.924    11:36:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:58.924    11:36:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:58.924    11:36:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:58.924    11:36:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:14:58.924    11:36:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:59.183   11:36:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:14:59.183   11:36:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:14:59.183   11:36:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:59.183   11:36:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:59.183  [2024-12-16 11:36:24.997972] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:14:59.183   11:36:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:59.183   11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:14:59.183   11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:59.183   11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:59.183   11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:59.183   11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:59.183   11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:59.183   11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:59.183   11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:59.184   11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:59.184   11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:59.184    11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:59.184    11:36:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:59.184    11:36:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:59.184    11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:59.184    11:36:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:59.184   11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:59.184    "name": "Existed_Raid",
00:14:59.184    "uuid": "00000000-0000-0000-0000-000000000000",
00:14:59.184    "strip_size_kb": 64,
00:14:59.184    "state": "configuring",
00:14:59.184    "raid_level": "raid5f",
00:14:59.184    "superblock": false,
00:14:59.184    "num_base_bdevs": 3,
00:14:59.184    "num_base_bdevs_discovered": 2,
00:14:59.184    "num_base_bdevs_operational": 3,
00:14:59.184    "base_bdevs_list": [
00:14:59.184      {
00:14:59.184        "name": "BaseBdev1",
00:14:59.184        "uuid": "7a6955f0-53b6-47cc-af57-9594623eda05",
00:14:59.184        "is_configured": true,
00:14:59.184        "data_offset": 0,
00:14:59.184        "data_size": 65536
00:14:59.184      },
00:14:59.184      {
00:14:59.184        "name": null,
00:14:59.184        "uuid": "c37714ae-dc0f-45eb-bfe1-1fd5628f751e",
00:14:59.184        "is_configured": false,
00:14:59.184        "data_offset": 0,
00:14:59.184        "data_size": 65536
00:14:59.184      },
00:14:59.184      {
00:14:59.184        "name": "BaseBdev3",
00:14:59.184        "uuid": "92de4235-7c85-48d8-ab82-57a6234e7f35",
00:14:59.184        "is_configured": true,
00:14:59.184        "data_offset": 0,
00:14:59.184        "data_size": 65536
00:14:59.184      }
00:14:59.184    ]
00:14:59.184  }'
00:14:59.184   11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:59.184   11:36:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
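The bdev_raid_add_base_bdev call above re-attaches the slot that bdev_raid_remove_base_bdev vacated: once BaseBdev3 is claimed again, its entry in base_bdevs_list flips back to is_configured == true, while the array stays in "configuring" because BaseBdev2's slot is still unconfigured. The same round trip as a sketch (rpc_cmd assumption as above):

    rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
    rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
    rpc_cmd bdev_raid_get_bdevs all \
        | jq -e '.[0].base_bdevs_list[2].is_configured == true'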
00:14:59.443    11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:59.443    11:36:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:59.443    11:36:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:59.443    11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:14:59.702    11:36:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:59.702   11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:14:59.702   11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:14:59.702   11:36:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:59.702   11:36:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:59.702  [2024-12-16 11:36:25.553024] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:14:59.703   11:36:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:59.703   11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:14:59.703   11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:59.703   11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:59.703   11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:59.703   11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:59.703   11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:59.703   11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:59.703   11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:59.703   11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:59.703   11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:59.703    11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:59.703    11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:59.703    11:36:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:59.703    11:36:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:59.703    11:36:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:59.703   11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:59.703    "name": "Existed_Raid",
00:14:59.703    "uuid": "00000000-0000-0000-0000-000000000000",
00:14:59.703    "strip_size_kb": 64,
00:14:59.703    "state": "configuring",
00:14:59.703    "raid_level": "raid5f",
00:14:59.703    "superblock": false,
00:14:59.703    "num_base_bdevs": 3,
00:14:59.703    "num_base_bdevs_discovered": 1,
00:14:59.703    "num_base_bdevs_operational": 3,
00:14:59.703    "base_bdevs_list": [
00:14:59.703      {
00:14:59.703        "name": null,
00:14:59.703        "uuid": "7a6955f0-53b6-47cc-af57-9594623eda05",
00:14:59.703        "is_configured": false,
00:14:59.703        "data_offset": 0,
00:14:59.703        "data_size": 65536
00:14:59.703      },
00:14:59.703      {
00:14:59.703        "name": null,
00:14:59.703        "uuid": "c37714ae-dc0f-45eb-bfe1-1fd5628f751e",
00:14:59.703        "is_configured": false,
00:14:59.703        "data_offset": 0,
00:14:59.703        "data_size": 65536
00:14:59.703      },
00:14:59.703      {
00:14:59.703        "name": "BaseBdev3",
00:14:59.703        "uuid": "92de4235-7c85-48d8-ab82-57a6234e7f35",
00:14:59.703        "is_configured": true,
00:14:59.703        "data_offset": 0,
00:14:59.703        "data_size": 65536
00:14:59.703      }
00:14:59.703    ]
00:14:59.703  }'
00:14:59.703   11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:59.703   11:36:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:59.962    11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:59.962    11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:14:59.962    11:36:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:59.962    11:36:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:59.962    11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.222   11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:15:00.222   11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:15:00.222   11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.222   11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:00.222  [2024-12-16 11:36:26.046683] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:00.222   11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.222   11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:00.222   11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:00.222   11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:00.222   11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:00.222   11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:00.222   11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:00.222   11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:00.222   11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:00.222   11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:00.222   11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:00.222    11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:00.222    11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:00.222    11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.222    11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:00.222    11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.222   11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:00.222    "name": "Existed_Raid",
00:15:00.222    "uuid": "00000000-0000-0000-0000-000000000000",
00:15:00.222    "strip_size_kb": 64,
00:15:00.222    "state": "configuring",
00:15:00.222    "raid_level": "raid5f",
00:15:00.222    "superblock": false,
00:15:00.222    "num_base_bdevs": 3,
00:15:00.222    "num_base_bdevs_discovered": 2,
00:15:00.222    "num_base_bdevs_operational": 3,
00:15:00.222    "base_bdevs_list": [
00:15:00.222      {
00:15:00.222        "name": null,
00:15:00.222        "uuid": "7a6955f0-53b6-47cc-af57-9594623eda05",
00:15:00.222        "is_configured": false,
00:15:00.222        "data_offset": 0,
00:15:00.222        "data_size": 65536
00:15:00.222      },
00:15:00.222      {
00:15:00.222        "name": "BaseBdev2",
00:15:00.222        "uuid": "c37714ae-dc0f-45eb-bfe1-1fd5628f751e",
00:15:00.222        "is_configured": true,
00:15:00.222        "data_offset": 0,
00:15:00.222        "data_size": 65536
00:15:00.222      },
00:15:00.222      {
00:15:00.222        "name": "BaseBdev3",
00:15:00.222        "uuid": "92de4235-7c85-48d8-ab82-57a6234e7f35",
00:15:00.222        "is_configured": true,
00:15:00.222        "data_offset": 0,
00:15:00.222        "data_size": 65536
00:15:00.222      }
00:15:00.222    ]
00:15:00.222  }'
00:15:00.222   11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:00.222   11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:00.482    11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:15:00.482    11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:00.482    11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.482    11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:00.482    11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.482   11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:15:00.482    11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:00.482    11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.482    11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:00.482    11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:15:00.482    11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.742   11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7a6955f0-53b6-47cc-af57-9594623eda05
00:15:00.742   11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.742   11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:00.742  [2024-12-16 11:36:26.576651] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:15:00.742  [2024-12-16 11:36:26.576698] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:15:00.742  [2024-12-16 11:36:26.576707] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:15:00.742  [2024-12-16 11:36:26.576947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:15:00.742  [2024-12-16 11:36:26.577378] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:15:00.742  [2024-12-16 11:36:26.577398] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00
00:15:00.742  [2024-12-16 11:36:26.577592] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:00.742  NewBaseBdev
00:15:00.742   11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.742   11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:15:00.742   11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev
00:15:00.742   11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:15:00.742   11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i
00:15:00.742   11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:15:00.742   11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:15:00.742   11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:15:00.742   11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.742   11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:00.742   11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.742   11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:15:00.742   11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.742   11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:00.742  [
00:15:00.742  {
00:15:00.742  "name": "NewBaseBdev",
00:15:00.742  "aliases": [
00:15:00.742  "7a6955f0-53b6-47cc-af57-9594623eda05"
00:15:00.742  ],
00:15:00.742  "product_name": "Malloc disk",
00:15:00.742  "block_size": 512,
00:15:00.742  "num_blocks": 65536,
00:15:00.743  "uuid": "7a6955f0-53b6-47cc-af57-9594623eda05",
00:15:00.743  "assigned_rate_limits": {
00:15:00.743  "rw_ios_per_sec": 0,
00:15:00.743  "rw_mbytes_per_sec": 0,
00:15:00.743  "r_mbytes_per_sec": 0,
00:15:00.743  "w_mbytes_per_sec": 0
00:15:00.743  },
00:15:00.743  "claimed": true,
00:15:00.743  "claim_type": "exclusive_write",
00:15:00.743  "zoned": false,
00:15:00.743  "supported_io_types": {
00:15:00.743  "read": true,
00:15:00.743  "write": true,
00:15:00.743  "unmap": true,
00:15:00.743  "flush": true,
00:15:00.743  "reset": true,
00:15:00.743  "nvme_admin": false,
00:15:00.743  "nvme_io": false,
00:15:00.743  "nvme_io_md": false,
00:15:00.743  "write_zeroes": true,
00:15:00.743  "zcopy": true,
00:15:00.743  "get_zone_info": false,
00:15:00.743  "zone_management": false,
00:15:00.743  "zone_append": false,
00:15:00.743  "compare": false,
00:15:00.743  "compare_and_write": false,
00:15:00.743  "abort": true,
00:15:00.743  "seek_hole": false,
00:15:00.743  "seek_data": false,
00:15:00.743  "copy": true,
00:15:00.743  "nvme_iov_md": false
00:15:00.743  },
00:15:00.743  "memory_domains": [
00:15:00.743  {
00:15:00.743  "dma_device_id": "system",
00:15:00.743  "dma_device_type": 1
00:15:00.743  },
00:15:00.743  {
00:15:00.743  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:00.743  "dma_device_type": 2
00:15:00.743  }
00:15:00.743  ],
00:15:00.743  "driver_specific": {}
00:15:00.743  }
00:15:00.743  ]
00:15:00.743   11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.743   11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:15:00.743   11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:15:00.743   11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:00.743   11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:00.743   11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:00.743   11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:00.743   11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:00.743   11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:00.743   11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:00.743   11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:00.743   11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:00.743    11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:00.743    11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.743    11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:00.743    11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:00.743    11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.743   11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:00.743    "name": "Existed_Raid",
00:15:00.743    "uuid": "816dbfeb-af68-49f9-baf0-8f9bd14f019c",
00:15:00.743    "strip_size_kb": 64,
00:15:00.743    "state": "online",
00:15:00.743    "raid_level": "raid5f",
00:15:00.743    "superblock": false,
00:15:00.743    "num_base_bdevs": 3,
00:15:00.743    "num_base_bdevs_discovered": 3,
00:15:00.743    "num_base_bdevs_operational": 3,
00:15:00.743    "base_bdevs_list": [
00:15:00.743      {
00:15:00.743        "name": "NewBaseBdev",
00:15:00.743        "uuid": "7a6955f0-53b6-47cc-af57-9594623eda05",
00:15:00.743        "is_configured": true,
00:15:00.743        "data_offset": 0,
00:15:00.743        "data_size": 65536
00:15:00.743      },
00:15:00.743      {
00:15:00.743        "name": "BaseBdev2",
00:15:00.743        "uuid": "c37714ae-dc0f-45eb-bfe1-1fd5628f751e",
00:15:00.743        "is_configured": true,
00:15:00.743        "data_offset": 0,
00:15:00.743        "data_size": 65536
00:15:00.743      },
00:15:00.743      {
00:15:00.743        "name": "BaseBdev3",
00:15:00.743        "uuid": "92de4235-7c85-48d8-ab82-57a6234e7f35",
00:15:00.743        "is_configured": true,
00:15:00.743        "data_offset": 0,
00:15:00.743        "data_size": 65536
00:15:00.743      }
00:15:00.743    ]
00:15:00.743  }'
00:15:00.743   11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:00.743   11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
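The trace above shows verify_raid_bdev_state being re-run after each base bdev change: it fetches the raid bdev over RPC, selects it by name with jq, and compares the reported fields against the expected values. A minimal sketch of that check, using only commands and fields visible in the trace (bdev_raid_get_bdevs, state, raid_level, strip_size_kb); the helper name check_raid_state is hypothetical and rpc_cmd is assumed to be the test framework's wrapper around SPDK's rpc.py:

# Hypothetical helper mirroring the state checks traced above.
check_raid_state() {
    local name=$1 expected_state=$2 expected_level=$3 expected_strip=$4
    local info
    # Same query pattern as bdev_raid.sh@113 in the trace.
    info=$(rpc_cmd bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
    [[ $(jq -r '.state' <<< "$info") == "$expected_state" ]] || return 1
    [[ $(jq -r '.raid_level' <<< "$info") == "$expected_level" ]] || return 1
    [[ $(jq -r '.strip_size_kb' <<< "$info") == "$expected_strip" ]] || return 1
}

# Example call matching the expectations used in this test:
check_raid_state Existed_Raid configuring raid5f 64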
00:15:01.002   11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:15:01.002   11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:15:01.002   11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:15:01.002   11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:15:01.002   11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:15:01.002   11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:15:01.002    11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:15:01.002    11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:15:01.002    11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.002    11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:01.262  [2024-12-16 11:36:27.072069] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:01.262    11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.262   11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:15:01.262    "name": "Existed_Raid",
00:15:01.262    "aliases": [
00:15:01.262      "816dbfeb-af68-49f9-baf0-8f9bd14f019c"
00:15:01.262    ],
00:15:01.262    "product_name": "Raid Volume",
00:15:01.262    "block_size": 512,
00:15:01.262    "num_blocks": 131072,
00:15:01.262    "uuid": "816dbfeb-af68-49f9-baf0-8f9bd14f019c",
00:15:01.262    "assigned_rate_limits": {
00:15:01.262      "rw_ios_per_sec": 0,
00:15:01.263      "rw_mbytes_per_sec": 0,
00:15:01.263      "r_mbytes_per_sec": 0,
00:15:01.263      "w_mbytes_per_sec": 0
00:15:01.263    },
00:15:01.263    "claimed": false,
00:15:01.263    "zoned": false,
00:15:01.263    "supported_io_types": {
00:15:01.263      "read": true,
00:15:01.263      "write": true,
00:15:01.263      "unmap": false,
00:15:01.263      "flush": false,
00:15:01.263      "reset": true,
00:15:01.263      "nvme_admin": false,
00:15:01.263      "nvme_io": false,
00:15:01.263      "nvme_io_md": false,
00:15:01.263      "write_zeroes": true,
00:15:01.263      "zcopy": false,
00:15:01.263      "get_zone_info": false,
00:15:01.263      "zone_management": false,
00:15:01.263      "zone_append": false,
00:15:01.263      "compare": false,
00:15:01.263      "compare_and_write": false,
00:15:01.263      "abort": false,
00:15:01.263      "seek_hole": false,
00:15:01.263      "seek_data": false,
00:15:01.263      "copy": false,
00:15:01.263      "nvme_iov_md": false
00:15:01.263    },
00:15:01.263    "driver_specific": {
00:15:01.263      "raid": {
00:15:01.263        "uuid": "816dbfeb-af68-49f9-baf0-8f9bd14f019c",
00:15:01.263        "strip_size_kb": 64,
00:15:01.263        "state": "online",
00:15:01.263        "raid_level": "raid5f",
00:15:01.263        "superblock": false,
00:15:01.263        "num_base_bdevs": 3,
00:15:01.263        "num_base_bdevs_discovered": 3,
00:15:01.263        "num_base_bdevs_operational": 3,
00:15:01.263        "base_bdevs_list": [
00:15:01.263          {
00:15:01.263            "name": "NewBaseBdev",
00:15:01.263            "uuid": "7a6955f0-53b6-47cc-af57-9594623eda05",
00:15:01.263            "is_configured": true,
00:15:01.263            "data_offset": 0,
00:15:01.263            "data_size": 65536
00:15:01.263          },
00:15:01.263          {
00:15:01.263            "name": "BaseBdev2",
00:15:01.263            "uuid": "c37714ae-dc0f-45eb-bfe1-1fd5628f751e",
00:15:01.263            "is_configured": true,
00:15:01.263            "data_offset": 0,
00:15:01.263            "data_size": 65536
00:15:01.263          },
00:15:01.263          {
00:15:01.263            "name": "BaseBdev3",
00:15:01.263            "uuid": "92de4235-7c85-48d8-ab82-57a6234e7f35",
00:15:01.263            "is_configured": true,
00:15:01.263            "data_offset": 0,
00:15:01.263            "data_size": 65536
00:15:01.263          }
00:15:01.263        ]
00:15:01.263      }
00:15:01.263    }
00:15:01.263  }'
00:15:01.263    11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:15:01.263   11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:15:01.263  BaseBdev2
00:15:01.263  BaseBdev3'
00:15:01.263    11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:01.263   11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:15:01.263   11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:01.263    11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:01.263    11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:15:01.263    11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.263    11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:01.263    11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.263   11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:15:01.263   11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:15:01.263   11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:01.263    11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:15:01.263    11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.263    11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:01.263    11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:01.263    11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.263   11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:15:01.263   11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:15:01.263   11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:01.263    11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:15:01.263    11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:01.263    11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.263    11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:01.523    11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.523   11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:15:01.523   11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:15:01.523   11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:15:01.523   11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.523   11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:01.523  [2024-12-16 11:36:27.367367] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:15:01.523  [2024-12-16 11:36:27.367396] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:01.523  [2024-12-16 11:36:27.367480] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:01.523  [2024-12-16 11:36:27.367734] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:01.523  [2024-12-16 11:36:27.367748] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline
00:15:01.523   11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.524   11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 90819
00:15:01.524   11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 90819 ']'
00:15:01.524   11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 90819
00:15:01.524    11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname
00:15:01.524   11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:15:01.524    11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90819
00:15:01.524   11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:15:01.524   11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:15:01.524   11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90819'
00:15:01.524  killing process with pid 90819
00:15:01.524   11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 90819
00:15:01.524  [2024-12-16 11:36:27.407568] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:15:01.524   11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 90819
00:15:01.524  [2024-12-16 11:36:27.437929] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:15:01.784   11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:15:01.784  
00:15:01.784  real	0m9.164s
00:15:01.784  user	0m15.747s
00:15:01.784  sys	0m1.875s
00:15:01.784   11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:15:01.784  ************************************
00:15:01.784  END TEST raid5f_state_function_test
00:15:01.784  ************************************
00:15:01.784   11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
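Before the superblock variant starts below, note how verify_raid_bdev_properties (traced near 00:15:01 above) compares the raid volume against each configured base bdev: it joins block_size, md_size, md_interleave and dif_type into one string for both sides and requires them to match. A rough sketch under that reading, built only from the jq expressions visible in the trace; compare_props is a hypothetical name and rpc_cmd is assumed to wrap SPDK's rpc.py:

# Hypothetical reconstruction of the property comparison loop seen above.
compare_props() {
    local raid_name=$1
    local fields='[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
    local raid_props base_props names name
    # Properties of the raid volume itself (bdev_raid.sh@189 pattern).
    raid_props=$(rpc_cmd bdev_get_bdevs -b "$raid_name" | jq -r ".[] | $fields")
    # Names of configured base bdevs (bdev_raid.sh@188 pattern).
    names=$(rpc_cmd bdev_get_bdevs -b "$raid_name" \
        | jq -r '.[].driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
    for name in $names; do
        base_props=$(rpc_cmd bdev_get_bdevs -b "$name" | jq -r ".[] | $fields")
        [[ "$base_props" == "$raid_props" ]] || return 1
    done
}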
00:15:01.784   11:36:27 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true
00:15:01.784   11:36:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:15:01.784   11:36:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:15:01.784   11:36:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:15:01.784  ************************************
00:15:01.784  START TEST raid5f_state_function_test_sb
00:15:01.784  ************************************
00:15:01.784   11:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true
00:15:01.784   11:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f
00:15:01.784   11:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:15:01.784   11:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:15:01.784   11:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:15:01.784    11:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:15:01.784    11:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:15:01.784    11:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:15:01.784    11:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:15:01.784    11:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:15:01.784    11:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:15:01.784    11:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:15:01.784    11:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:15:01.784    11:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:15:01.784    11:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:15:01.784    11:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:15:01.784   11:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:15:01.784   11:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:15:01.784   11:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:15:01.784   11:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:15:01.784   11:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:15:01.784   11:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:15:01.784   11:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']'
00:15:01.784   11:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:15:01.784   11:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:15:01.784   11:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:15:01.784   11:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:15:01.784   11:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=91424
00:15:01.784   11:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:15:01.784   11:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 91424'
00:15:01.784  Process raid pid: 91424
00:15:01.784   11:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 91424
00:15:01.784   11:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 91424 ']'
00:15:01.784   11:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:01.784  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:01.784   11:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:15:01.784   11:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:01.784   11:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:15:01.784   11:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.784  [2024-12-16 11:36:27.846983] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:15:01.784  [2024-12-16 11:36:27.847109] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:02.044  [2024-12-16 11:36:28.006615] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:02.044  [2024-12-16 11:36:28.054379] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:15:02.044  [2024-12-16 11:36:28.096074] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:02.044  [2024-12-16 11:36:28.096184] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:02.984   11:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:15:02.984   11:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0
00:15:02.984   11:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:15:02.984   11:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:02.984   11:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:02.984  [2024-12-16 11:36:28.689044] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:02.984  [2024-12-16 11:36:28.689149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:02.984  [2024-12-16 11:36:28.689184] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:02.984  [2024-12-16 11:36:28.689208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:02.984  [2024-12-16 11:36:28.689227] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:02.984  [2024-12-16 11:36:28.689251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:02.984   11:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:02.984   11:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:02.984   11:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:02.984   11:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:02.984   11:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:02.984   11:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:02.984   11:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:02.984   11:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:02.984   11:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:02.984   11:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:02.984   11:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:02.984    11:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:02.984    11:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:02.984    11:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:02.984    11:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:02.984    11:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:02.984   11:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:02.984    "name": "Existed_Raid",
00:15:02.984    "uuid": "a6f0c601-e061-4850-994d-f9b502c44558",
00:15:02.984    "strip_size_kb": 64,
00:15:02.984    "state": "configuring",
00:15:02.984    "raid_level": "raid5f",
00:15:02.984    "superblock": true,
00:15:02.984    "num_base_bdevs": 3,
00:15:02.984    "num_base_bdevs_discovered": 0,
00:15:02.984    "num_base_bdevs_operational": 3,
00:15:02.984    "base_bdevs_list": [
00:15:02.984      {
00:15:02.984        "name": "BaseBdev1",
00:15:02.984        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:02.984        "is_configured": false,
00:15:02.984        "data_offset": 0,
00:15:02.984        "data_size": 0
00:15:02.984      },
00:15:02.984      {
00:15:02.984        "name": "BaseBdev2",
00:15:02.984        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:02.984        "is_configured": false,
00:15:02.984        "data_offset": 0,
00:15:02.984        "data_size": 0
00:15:02.984      },
00:15:02.984      {
00:15:02.984        "name": "BaseBdev3",
00:15:02.984        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:02.984        "is_configured": false,
00:15:02.984        "data_offset": 0,
00:15:02.984        "data_size": 0
00:15:02.984      }
00:15:02.984    ]
00:15:02.984  }'
00:15:02.984   11:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:02.984   11:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:03.244   11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:15:03.244   11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:03.244   11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:03.244  [2024-12-16 11:36:29.172131] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:15:03.244  [2024-12-16 11:36:29.172173] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:15:03.244   11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:03.244   11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:15:03.244   11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:03.244   11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:03.245  [2024-12-16 11:36:29.184146] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:03.245  [2024-12-16 11:36:29.184230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:03.245  [2024-12-16 11:36:29.184260] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:03.245  [2024-12-16 11:36:29.184285] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:03.245  [2024-12-16 11:36:29.184304] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:03.245  [2024-12-16 11:36:29.184331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:03.245  [2024-12-16 11:36:29.204846] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:03.245  BaseBdev1
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:03.245  [
00:15:03.245  {
00:15:03.245  "name": "BaseBdev1",
00:15:03.245  "aliases": [
00:15:03.245  "c3f61219-c415-4d9e-9902-537460590d77"
00:15:03.245  ],
00:15:03.245  "product_name": "Malloc disk",
00:15:03.245  "block_size": 512,
00:15:03.245  "num_blocks": 65536,
00:15:03.245  "uuid": "c3f61219-c415-4d9e-9902-537460590d77",
00:15:03.245  "assigned_rate_limits": {
00:15:03.245  "rw_ios_per_sec": 0,
00:15:03.245  "rw_mbytes_per_sec": 0,
00:15:03.245  "r_mbytes_per_sec": 0,
00:15:03.245  "w_mbytes_per_sec": 0
00:15:03.245  },
00:15:03.245  "claimed": true,
00:15:03.245  "claim_type": "exclusive_write",
00:15:03.245  "zoned": false,
00:15:03.245  "supported_io_types": {
00:15:03.245  "read": true,
00:15:03.245  "write": true,
00:15:03.245  "unmap": true,
00:15:03.245  "flush": true,
00:15:03.245  "reset": true,
00:15:03.245  "nvme_admin": false,
00:15:03.245  "nvme_io": false,
00:15:03.245  "nvme_io_md": false,
00:15:03.245  "write_zeroes": true,
00:15:03.245  "zcopy": true,
00:15:03.245  "get_zone_info": false,
00:15:03.245  "zone_management": false,
00:15:03.245  "zone_append": false,
00:15:03.245  "compare": false,
00:15:03.245  "compare_and_write": false,
00:15:03.245  "abort": true,
00:15:03.245  "seek_hole": false,
00:15:03.245  "seek_data": false,
00:15:03.245  "copy": true,
00:15:03.245  "nvme_iov_md": false
00:15:03.245  },
00:15:03.245  "memory_domains": [
00:15:03.245  {
00:15:03.245  "dma_device_id": "system",
00:15:03.245  "dma_device_type": 1
00:15:03.245  },
00:15:03.245  {
00:15:03.245  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:03.245  "dma_device_type": 2
00:15:03.245  }
00:15:03.245  ],
00:15:03.245  "driver_specific": {}
00:15:03.245  }
00:15:03.245  ]
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:03.245    11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:03.245    11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:03.245    11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:03.245    11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:03.245    11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:03.245    "name": "Existed_Raid",
00:15:03.245    "uuid": "20a475c5-0f1b-468f-a292-58dd0a089a6a",
00:15:03.245    "strip_size_kb": 64,
00:15:03.245    "state": "configuring",
00:15:03.245    "raid_level": "raid5f",
00:15:03.245    "superblock": true,
00:15:03.245    "num_base_bdevs": 3,
00:15:03.245    "num_base_bdevs_discovered": 1,
00:15:03.245    "num_base_bdevs_operational": 3,
00:15:03.245    "base_bdevs_list": [
00:15:03.245      {
00:15:03.245        "name": "BaseBdev1",
00:15:03.245        "uuid": "c3f61219-c415-4d9e-9902-537460590d77",
00:15:03.245        "is_configured": true,
00:15:03.245        "data_offset": 2048,
00:15:03.245        "data_size": 63488
00:15:03.245      },
00:15:03.245      {
00:15:03.245        "name": "BaseBdev2",
00:15:03.245        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:03.245        "is_configured": false,
00:15:03.245        "data_offset": 0,
00:15:03.245        "data_size": 0
00:15:03.245      },
00:15:03.245      {
00:15:03.245        "name": "BaseBdev3",
00:15:03.245        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:03.245        "is_configured": false,
00:15:03.245        "data_offset": 0,
00:15:03.245        "data_size": 0
00:15:03.245      }
00:15:03.245    ]
00:15:03.245  }'
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:03.245   11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:03.820   11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:15:03.820   11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:03.820   11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:03.820  [2024-12-16 11:36:29.732018] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:15:03.820  [2024-12-16 11:36:29.732076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:15:03.820   11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:03.820   11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:15:03.820   11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:03.820   11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:03.820  [2024-12-16 11:36:29.744021] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:03.820  [2024-12-16 11:36:29.745962] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:03.820  [2024-12-16 11:36:29.746005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:03.820  [2024-12-16 11:36:29.746014] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:03.820  [2024-12-16 11:36:29.746025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:03.820   11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:03.820   11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:15:03.820   11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:15:03.820   11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:03.820   11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:03.820   11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:03.820   11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:03.820   11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:03.820   11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:03.820   11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:03.820   11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:03.820   11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:03.820   11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:03.820    11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:03.820    11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:03.820    11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:03.820    11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:03.820    11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:03.820   11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:03.820    "name": "Existed_Raid",
00:15:03.820    "uuid": "06090325-6288-4da9-8917-5b4ad811e0e2",
00:15:03.820    "strip_size_kb": 64,
00:15:03.820    "state": "configuring",
00:15:03.820    "raid_level": "raid5f",
00:15:03.820    "superblock": true,
00:15:03.820    "num_base_bdevs": 3,
00:15:03.820    "num_base_bdevs_discovered": 1,
00:15:03.820    "num_base_bdevs_operational": 3,
00:15:03.820    "base_bdevs_list": [
00:15:03.820      {
00:15:03.820        "name": "BaseBdev1",
00:15:03.820        "uuid": "c3f61219-c415-4d9e-9902-537460590d77",
00:15:03.820        "is_configured": true,
00:15:03.820        "data_offset": 2048,
00:15:03.820        "data_size": 63488
00:15:03.820      },
00:15:03.820      {
00:15:03.820        "name": "BaseBdev2",
00:15:03.820        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:03.820        "is_configured": false,
00:15:03.820        "data_offset": 0,
00:15:03.820        "data_size": 0
00:15:03.820      },
00:15:03.820      {
00:15:03.820        "name": "BaseBdev3",
00:15:03.820        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:03.820        "is_configured": false,
00:15:03.820        "data_offset": 0,
00:15:03.820        "data_size": 0
00:15:03.820      }
00:15:03.820    ]
00:15:03.820  }'
00:15:03.820   11:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:03.820   11:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:04.390  [2024-12-16 11:36:30.229071] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:04.390  BaseBdev2
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:04.390  [
00:15:04.390  {
00:15:04.390  "name": "BaseBdev2",
00:15:04.390  "aliases": [
00:15:04.390  "537c9a55-6c6a-49c6-abe9-9bda8be09bb2"
00:15:04.390  ],
00:15:04.390  "product_name": "Malloc disk",
00:15:04.390  "block_size": 512,
00:15:04.390  "num_blocks": 65536,
00:15:04.390  "uuid": "537c9a55-6c6a-49c6-abe9-9bda8be09bb2",
00:15:04.390  "assigned_rate_limits": {
00:15:04.390  "rw_ios_per_sec": 0,
00:15:04.390  "rw_mbytes_per_sec": 0,
00:15:04.390  "r_mbytes_per_sec": 0,
00:15:04.390  "w_mbytes_per_sec": 0
00:15:04.390  },
00:15:04.390  "claimed": true,
00:15:04.390  "claim_type": "exclusive_write",
00:15:04.390  "zoned": false,
00:15:04.390  "supported_io_types": {
00:15:04.390  "read": true,
00:15:04.390  "write": true,
00:15:04.390  "unmap": true,
00:15:04.390  "flush": true,
00:15:04.390  "reset": true,
00:15:04.390  "nvme_admin": false,
00:15:04.390  "nvme_io": false,
00:15:04.390  "nvme_io_md": false,
00:15:04.390  "write_zeroes": true,
00:15:04.390  "zcopy": true,
00:15:04.390  "get_zone_info": false,
00:15:04.390  "zone_management": false,
00:15:04.390  "zone_append": false,
00:15:04.390  "compare": false,
00:15:04.390  "compare_and_write": false,
00:15:04.390  "abort": true,
00:15:04.390  "seek_hole": false,
00:15:04.390  "seek_data": false,
00:15:04.390  "copy": true,
00:15:04.390  "nvme_iov_md": false
00:15:04.390  },
00:15:04.390  "memory_domains": [
00:15:04.390  {
00:15:04.390  "dma_device_id": "system",
00:15:04.390  "dma_device_type": 1
00:15:04.390  },
00:15:04.390  {
00:15:04.390  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:04.390  "dma_device_type": 2
00:15:04.390  }
00:15:04.390  ],
00:15:04.390  "driver_specific": {}
00:15:04.390  }
00:15:04.390  ]
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:04.390    11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:04.390    11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:04.390    11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:04.390    11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:04.390    11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:04.390    "name": "Existed_Raid",
00:15:04.390    "uuid": "06090325-6288-4da9-8917-5b4ad811e0e2",
00:15:04.390    "strip_size_kb": 64,
00:15:04.390    "state": "configuring",
00:15:04.390    "raid_level": "raid5f",
00:15:04.390    "superblock": true,
00:15:04.390    "num_base_bdevs": 3,
00:15:04.390    "num_base_bdevs_discovered": 2,
00:15:04.390    "num_base_bdevs_operational": 3,
00:15:04.390    "base_bdevs_list": [
00:15:04.390      {
00:15:04.390        "name": "BaseBdev1",
00:15:04.390        "uuid": "c3f61219-c415-4d9e-9902-537460590d77",
00:15:04.390        "is_configured": true,
00:15:04.390        "data_offset": 2048,
00:15:04.390        "data_size": 63488
00:15:04.390      },
00:15:04.390      {
00:15:04.390        "name": "BaseBdev2",
00:15:04.390        "uuid": "537c9a55-6c6a-49c6-abe9-9bda8be09bb2",
00:15:04.390        "is_configured": true,
00:15:04.390        "data_offset": 2048,
00:15:04.390        "data_size": 63488
00:15:04.390      },
00:15:04.390      {
00:15:04.390        "name": "BaseBdev3",
00:15:04.390        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:04.390        "is_configured": false,
00:15:04.390        "data_offset": 0,
00:15:04.390        "data_size": 0
00:15:04.390      }
00:15:04.390    ]
00:15:04.390  }'
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:04.390   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:04.960   11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:15:04.960   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:04.960   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:04.960  [2024-12-16 11:36:30.751242] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:04.960  [2024-12-16 11:36:30.751475] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:15:04.960  [2024-12-16 11:36:30.751493] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:15:04.960  [2024-12-16 11:36:30.751800] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:15:04.960  BaseBdev3
00:15:04.960  [2024-12-16 11:36:30.752256] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:15:04.960  [2024-12-16 11:36:30.752280] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:15:04.960  [2024-12-16 11:36:30.752428] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:04.960   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:04.960   11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:15:04.960   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:15:04.960   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:15:04.960   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:15:04.960   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:15:04.960   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:15:04.960   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:15:04.960   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:04.960   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:04.960   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:04.960   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:15:04.960   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:04.960   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:04.960  [
00:15:04.960  {
00:15:04.960  "name": "BaseBdev3",
00:15:04.960  "aliases": [
00:15:04.960  "bcde5083-ca14-4759-89eb-68fafc0a0057"
00:15:04.960  ],
00:15:04.960  "product_name": "Malloc disk",
00:15:04.960  "block_size": 512,
00:15:04.960  "num_blocks": 65536,
00:15:04.960  "uuid": "bcde5083-ca14-4759-89eb-68fafc0a0057",
00:15:04.960  "assigned_rate_limits": {
00:15:04.960  "rw_ios_per_sec": 0,
00:15:04.960  "rw_mbytes_per_sec": 0,
00:15:04.960  "r_mbytes_per_sec": 0,
00:15:04.960  "w_mbytes_per_sec": 0
00:15:04.960  },
00:15:04.960  "claimed": true,
00:15:04.960  "claim_type": "exclusive_write",
00:15:04.960  "zoned": false,
00:15:04.960  "supported_io_types": {
00:15:04.960  "read": true,
00:15:04.960  "write": true,
00:15:04.960  "unmap": true,
00:15:04.960  "flush": true,
00:15:04.960  "reset": true,
00:15:04.960  "nvme_admin": false,
00:15:04.960  "nvme_io": false,
00:15:04.960  "nvme_io_md": false,
00:15:04.960  "write_zeroes": true,
00:15:04.960  "zcopy": true,
00:15:04.960  "get_zone_info": false,
00:15:04.960  "zone_management": false,
00:15:04.960  "zone_append": false,
00:15:04.960  "compare": false,
00:15:04.960  "compare_and_write": false,
00:15:04.960  "abort": true,
00:15:04.960  "seek_hole": false,
00:15:04.960  "seek_data": false,
00:15:04.960  "copy": true,
00:15:04.960  "nvme_iov_md": false
00:15:04.960  },
00:15:04.960  "memory_domains": [
00:15:04.960  {
00:15:04.960  "dma_device_id": "system",
00:15:04.960  "dma_device_type": 1
00:15:04.960  },
00:15:04.960  {
00:15:04.960  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:04.960  "dma_device_type": 2
00:15:04.960  }
00:15:04.960  ],
00:15:04.960  "driver_specific": {}
00:15:04.960  }
00:15:04.960  ]
00:15:04.960   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:04.960   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:15:04.960   11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:15:04.960   11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:15:04.960   11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:15:04.960   11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:04.960   11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:04.960   11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:04.960   11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:04.960   11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:04.960   11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:04.960   11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:04.960   11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:04.960   11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:04.960    11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:04.960    11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:04.960    11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:04.960    11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:04.960    11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:04.960   11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:04.960    "name": "Existed_Raid",
00:15:04.960    "uuid": "06090325-6288-4da9-8917-5b4ad811e0e2",
00:15:04.960    "strip_size_kb": 64,
00:15:04.960    "state": "online",
00:15:04.960    "raid_level": "raid5f",
00:15:04.960    "superblock": true,
00:15:04.960    "num_base_bdevs": 3,
00:15:04.960    "num_base_bdevs_discovered": 3,
00:15:04.960    "num_base_bdevs_operational": 3,
00:15:04.960    "base_bdevs_list": [
00:15:04.960      {
00:15:04.960        "name": "BaseBdev1",
00:15:04.960        "uuid": "c3f61219-c415-4d9e-9902-537460590d77",
00:15:04.960        "is_configured": true,
00:15:04.960        "data_offset": 2048,
00:15:04.960        "data_size": 63488
00:15:04.960      },
00:15:04.960      {
00:15:04.960        "name": "BaseBdev2",
00:15:04.960        "uuid": "537c9a55-6c6a-49c6-abe9-9bda8be09bb2",
00:15:04.960        "is_configured": true,
00:15:04.961        "data_offset": 2048,
00:15:04.961        "data_size": 63488
00:15:04.961      },
00:15:04.961      {
00:15:04.961        "name": "BaseBdev3",
00:15:04.961        "uuid": "bcde5083-ca14-4759-89eb-68fafc0a0057",
00:15:04.961        "is_configured": true,
00:15:04.961        "data_offset": 2048,
00:15:04.961        "data_size": 63488
00:15:04.961      }
00:15:04.961    ]
00:15:04.961  }'
00:15:04.961   11:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:04.961   11:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:05.221   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:15:05.221   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:15:05.221   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:15:05.221   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:15:05.221   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:15:05.221   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:15:05.221    11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:15:05.221    11:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:05.221    11:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:05.221    11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:15:05.221  [2024-12-16 11:36:31.218707] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:05.221    11:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:05.221   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:15:05.221    "name": "Existed_Raid",
00:15:05.221    "aliases": [
00:15:05.221      "06090325-6288-4da9-8917-5b4ad811e0e2"
00:15:05.221    ],
00:15:05.221    "product_name": "Raid Volume",
00:15:05.221    "block_size": 512,
00:15:05.221    "num_blocks": 126976,
00:15:05.221    "uuid": "06090325-6288-4da9-8917-5b4ad811e0e2",
00:15:05.221    "assigned_rate_limits": {
00:15:05.221      "rw_ios_per_sec": 0,
00:15:05.221      "rw_mbytes_per_sec": 0,
00:15:05.221      "r_mbytes_per_sec": 0,
00:15:05.221      "w_mbytes_per_sec": 0
00:15:05.221    },
00:15:05.221    "claimed": false,
00:15:05.221    "zoned": false,
00:15:05.221    "supported_io_types": {
00:15:05.221      "read": true,
00:15:05.221      "write": true,
00:15:05.221      "unmap": false,
00:15:05.221      "flush": false,
00:15:05.221      "reset": true,
00:15:05.221      "nvme_admin": false,
00:15:05.221      "nvme_io": false,
00:15:05.221      "nvme_io_md": false,
00:15:05.221      "write_zeroes": true,
00:15:05.221      "zcopy": false,
00:15:05.221      "get_zone_info": false,
00:15:05.221      "zone_management": false,
00:15:05.221      "zone_append": false,
00:15:05.221      "compare": false,
00:15:05.221      "compare_and_write": false,
00:15:05.221      "abort": false,
00:15:05.221      "seek_hole": false,
00:15:05.221      "seek_data": false,
00:15:05.221      "copy": false,
00:15:05.221      "nvme_iov_md": false
00:15:05.221    },
00:15:05.221    "driver_specific": {
00:15:05.221      "raid": {
00:15:05.221        "uuid": "06090325-6288-4da9-8917-5b4ad811e0e2",
00:15:05.221        "strip_size_kb": 64,
00:15:05.221        "state": "online",
00:15:05.221        "raid_level": "raid5f",
00:15:05.221        "superblock": true,
00:15:05.221        "num_base_bdevs": 3,
00:15:05.221        "num_base_bdevs_discovered": 3,
00:15:05.221        "num_base_bdevs_operational": 3,
00:15:05.221        "base_bdevs_list": [
00:15:05.221          {
00:15:05.221            "name": "BaseBdev1",
00:15:05.221            "uuid": "c3f61219-c415-4d9e-9902-537460590d77",
00:15:05.221            "is_configured": true,
00:15:05.221            "data_offset": 2048,
00:15:05.221            "data_size": 63488
00:15:05.221          },
00:15:05.221          {
00:15:05.221            "name": "BaseBdev2",
00:15:05.221            "uuid": "537c9a55-6c6a-49c6-abe9-9bda8be09bb2",
00:15:05.221            "is_configured": true,
00:15:05.221            "data_offset": 2048,
00:15:05.221            "data_size": 63488
00:15:05.221          },
00:15:05.221          {
00:15:05.221            "name": "BaseBdev3",
00:15:05.221            "uuid": "bcde5083-ca14-4759-89eb-68fafc0a0057",
00:15:05.221            "is_configured": true,
00:15:05.221            "data_offset": 2048,
00:15:05.221            "data_size": 63488
00:15:05.221          }
00:15:05.221        ]
00:15:05.221      }
00:15:05.221    }
00:15:05.221  }'
00:15:05.221    11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:15:05.481   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:15:05.481  BaseBdev2
00:15:05.481  BaseBdev3'
00:15:05.481    11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:05.481   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:15:05.481   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:05.481    11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:15:05.481    11:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:05.481    11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:05.481    11:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:05.481    11:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:05.481   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:15:05.481   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:15:05.481   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:05.481    11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:05.481    11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:15:05.481    11:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:05.481    11:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:05.481    11:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:05.481   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:15:05.481   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:15:05.481   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:05.481    11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:05.481    11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:15:05.481    11:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:05.481    11:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:05.481    11:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:05.481   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:15:05.481   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:15:05.481   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:15:05.481   11:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:05.481   11:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:05.481  [2024-12-16 11:36:31.490095] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:15:05.481   11:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:05.481   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:15:05.481   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f
00:15:05.481   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:15:05.481   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0
00:15:05.481   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:15:05.481   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2
00:15:05.481   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:05.481   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:05.481   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:05.481   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:05.481   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:05.481   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:05.481   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:05.481   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:05.482   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:05.482    11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:05.482    11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:05.482    11:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:05.482    11:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:05.482    11:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:05.741   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:05.741    "name": "Existed_Raid",
00:15:05.741    "uuid": "06090325-6288-4da9-8917-5b4ad811e0e2",
00:15:05.741    "strip_size_kb": 64,
00:15:05.741    "state": "online",
00:15:05.741    "raid_level": "raid5f",
00:15:05.741    "superblock": true,
00:15:05.741    "num_base_bdevs": 3,
00:15:05.741    "num_base_bdevs_discovered": 2,
00:15:05.741    "num_base_bdevs_operational": 2,
00:15:05.741    "base_bdevs_list": [
00:15:05.741      {
00:15:05.741        "name": null,
00:15:05.741        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:05.741        "is_configured": false,
00:15:05.741        "data_offset": 0,
00:15:05.741        "data_size": 63488
00:15:05.741      },
00:15:05.741      {
00:15:05.741        "name": "BaseBdev2",
00:15:05.741        "uuid": "537c9a55-6c6a-49c6-abe9-9bda8be09bb2",
00:15:05.741        "is_configured": true,
00:15:05.741        "data_offset": 2048,
00:15:05.741        "data_size": 63488
00:15:05.741      },
00:15:05.741      {
00:15:05.741        "name": "BaseBdev3",
00:15:05.741        "uuid": "bcde5083-ca14-4759-89eb-68fafc0a0057",
00:15:05.741        "is_configured": true,
00:15:05.741        "data_offset": 2048,
00:15:05.741        "data_size": 63488
00:15:05.741      }
00:15:05.741    ]
00:15:05.741  }'
00:15:05.741   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:05.741   11:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:06.001   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:15:06.001   11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:15:06.001    11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:06.001    11:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:06.001    11:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:06.001    11:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:15:06.001    11:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:06.001   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:15:06.001   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:06.001   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:15:06.001   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:06.001   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:06.001  [2024-12-16 11:36:32.008721] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:15:06.001  [2024-12-16 11:36:32.008931] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:06.001  [2024-12-16 11:36:32.020213] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:06.001   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:06.001   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:15:06.001   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:15:06.001    11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:06.001    11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:06.001    11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:06.001    11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:15:06.001    11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:06.262  [2024-12-16 11:36:32.076180] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:15:06.262  [2024-12-16 11:36:32.076245] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:15:06.262    11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:15:06.262    11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:06.262    11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:06.262    11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:06.262    11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:06.262  BaseBdev2
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:06.262  [
00:15:06.262  {
00:15:06.262  "name": "BaseBdev2",
00:15:06.262  "aliases": [
00:15:06.262  "fd4e989a-dbef-4846-bf6d-b46bee81f363"
00:15:06.262  ],
00:15:06.262  "product_name": "Malloc disk",
00:15:06.262  "block_size": 512,
00:15:06.262  "num_blocks": 65536,
00:15:06.262  "uuid": "fd4e989a-dbef-4846-bf6d-b46bee81f363",
00:15:06.262  "assigned_rate_limits": {
00:15:06.262  "rw_ios_per_sec": 0,
00:15:06.262  "rw_mbytes_per_sec": 0,
00:15:06.262  "r_mbytes_per_sec": 0,
00:15:06.262  "w_mbytes_per_sec": 0
00:15:06.262  },
00:15:06.262  "claimed": false,
00:15:06.262  "zoned": false,
00:15:06.262  "supported_io_types": {
00:15:06.262  "read": true,
00:15:06.262  "write": true,
00:15:06.262  "unmap": true,
00:15:06.262  "flush": true,
00:15:06.262  "reset": true,
00:15:06.262  "nvme_admin": false,
00:15:06.262  "nvme_io": false,
00:15:06.262  "nvme_io_md": false,
00:15:06.262  "write_zeroes": true,
00:15:06.262  "zcopy": true,
00:15:06.262  "get_zone_info": false,
00:15:06.262  "zone_management": false,
00:15:06.262  "zone_append": false,
00:15:06.262  "compare": false,
00:15:06.262  "compare_and_write": false,
00:15:06.262  "abort": true,
00:15:06.262  "seek_hole": false,
00:15:06.262  "seek_data": false,
00:15:06.262  "copy": true,
00:15:06.262  "nvme_iov_md": false
00:15:06.262  },
00:15:06.262  "memory_domains": [
00:15:06.262  {
00:15:06.262  "dma_device_id": "system",
00:15:06.262  "dma_device_type": 1
00:15:06.262  },
00:15:06.262  {
00:15:06.262  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:06.262  "dma_device_type": 2
00:15:06.262  }
00:15:06.262  ],
00:15:06.262  "driver_specific": {}
00:15:06.262  }
00:15:06.262  ]
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:06.262  BaseBdev3
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:06.262   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:06.262  [
00:15:06.262  {
00:15:06.262  "name": "BaseBdev3",
00:15:06.262  "aliases": [
00:15:06.262  "75bf7213-9fc5-4aae-82a9-7adc7499d5bc"
00:15:06.262  ],
00:15:06.262  "product_name": "Malloc disk",
00:15:06.262  "block_size": 512,
00:15:06.262  "num_blocks": 65536,
00:15:06.262  "uuid": "75bf7213-9fc5-4aae-82a9-7adc7499d5bc",
00:15:06.262  "assigned_rate_limits": {
00:15:06.262  "rw_ios_per_sec": 0,
00:15:06.262  "rw_mbytes_per_sec": 0,
00:15:06.262  "r_mbytes_per_sec": 0,
00:15:06.262  "w_mbytes_per_sec": 0
00:15:06.262  },
00:15:06.262  "claimed": false,
00:15:06.262  "zoned": false,
00:15:06.262  "supported_io_types": {
00:15:06.262  "read": true,
00:15:06.262  "write": true,
00:15:06.262  "unmap": true,
00:15:06.262  "flush": true,
00:15:06.262  "reset": true,
00:15:06.262  "nvme_admin": false,
00:15:06.262  "nvme_io": false,
00:15:06.262  "nvme_io_md": false,
00:15:06.262  "write_zeroes": true,
00:15:06.262  "zcopy": true,
00:15:06.262  "get_zone_info": false,
00:15:06.262  "zone_management": false,
00:15:06.262  "zone_append": false,
00:15:06.262  "compare": false,
00:15:06.262  "compare_and_write": false,
00:15:06.262  "abort": true,
00:15:06.262  "seek_hole": false,
00:15:06.262  "seek_data": false,
00:15:06.262  "copy": true,
00:15:06.262  "nvme_iov_md": false
00:15:06.262  },
00:15:06.262  "memory_domains": [
00:15:06.262  {
00:15:06.262  "dma_device_id": "system",
00:15:06.262  "dma_device_type": 1
00:15:06.262  },
00:15:06.262  {
00:15:06.262  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:06.262  "dma_device_type": 2
00:15:06.263  }
00:15:06.263  ],
00:15:06.263  "driver_specific": {}
00:15:06.263  }
00:15:06.263  ]
00:15:06.263   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:06.263   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:15:06.263   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:15:06.263   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:15:06.263   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:15:06.263   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:06.263   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:06.263  [2024-12-16 11:36:32.204533] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:06.263  [2024-12-16 11:36:32.204629] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:06.263  [2024-12-16 11:36:32.204670] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:06.263  [2024-12-16 11:36:32.206474] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:06.263   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:06.263   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:06.263   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:06.263   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:06.263   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:06.263   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:06.263   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:06.263   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:06.263   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:06.263   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:06.263   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:06.263    11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:06.263    11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:06.263    11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:06.263    11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:06.263    11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:06.263   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:06.263    "name": "Existed_Raid",
00:15:06.263    "uuid": "544b9133-dfdc-4cb5-9a72-447faba0fa1f",
00:15:06.263    "strip_size_kb": 64,
00:15:06.263    "state": "configuring",
00:15:06.263    "raid_level": "raid5f",
00:15:06.263    "superblock": true,
00:15:06.263    "num_base_bdevs": 3,
00:15:06.263    "num_base_bdevs_discovered": 2,
00:15:06.263    "num_base_bdevs_operational": 3,
00:15:06.263    "base_bdevs_list": [
00:15:06.263      {
00:15:06.263        "name": "BaseBdev1",
00:15:06.263        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:06.263        "is_configured": false,
00:15:06.263        "data_offset": 0,
00:15:06.263        "data_size": 0
00:15:06.263      },
00:15:06.263      {
00:15:06.263        "name": "BaseBdev2",
00:15:06.263        "uuid": "fd4e989a-dbef-4846-bf6d-b46bee81f363",
00:15:06.263        "is_configured": true,
00:15:06.263        "data_offset": 2048,
00:15:06.263        "data_size": 63488
00:15:06.263      },
00:15:06.263      {
00:15:06.263        "name": "BaseBdev3",
00:15:06.263        "uuid": "75bf7213-9fc5-4aae-82a9-7adc7499d5bc",
00:15:06.263        "is_configured": true,
00:15:06.263        "data_offset": 2048,
00:15:06.263        "data_size": 63488
00:15:06.263      }
00:15:06.263    ]
00:15:06.263  }'
00:15:06.263   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:06.263   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:06.833   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:15:06.833   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:06.833   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:06.833  [2024-12-16 11:36:32.667732] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:15:06.833   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:06.833   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:06.833   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:06.833   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:06.833   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:06.833   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:06.833   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:06.833   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:06.833   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:06.833   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:06.833   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:06.833    11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:06.833    11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:06.833    11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:06.833    11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:06.833    11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:06.833   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:06.833    "name": "Existed_Raid",
00:15:06.833    "uuid": "544b9133-dfdc-4cb5-9a72-447faba0fa1f",
00:15:06.833    "strip_size_kb": 64,
00:15:06.833    "state": "configuring",
00:15:06.833    "raid_level": "raid5f",
00:15:06.833    "superblock": true,
00:15:06.833    "num_base_bdevs": 3,
00:15:06.833    "num_base_bdevs_discovered": 1,
00:15:06.833    "num_base_bdevs_operational": 3,
00:15:06.833    "base_bdevs_list": [
00:15:06.833      {
00:15:06.833        "name": "BaseBdev1",
00:15:06.833        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:06.833        "is_configured": false,
00:15:06.833        "data_offset": 0,
00:15:06.833        "data_size": 0
00:15:06.833      },
00:15:06.833      {
00:15:06.833        "name": null,
00:15:06.833        "uuid": "fd4e989a-dbef-4846-bf6d-b46bee81f363",
00:15:06.833        "is_configured": false,
00:15:06.833        "data_offset": 0,
00:15:06.833        "data_size": 63488
00:15:06.833      },
00:15:06.833      {
00:15:06.833        "name": "BaseBdev3",
00:15:06.833        "uuid": "75bf7213-9fc5-4aae-82a9-7adc7499d5bc",
00:15:06.833        "is_configured": true,
00:15:06.833        "data_offset": 2048,
00:15:06.833        "data_size": 63488
00:15:06.833      }
00:15:06.833    ]
00:15:06.833  }'
00:15:06.833   11:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:06.833   11:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:07.093    11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:15:07.093    11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:07.093    11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:07.093    11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:07.353    11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:07.353   11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:15:07.353   11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:15:07.353   11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:07.353   11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:07.353  [2024-12-16 11:36:33.213664] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:07.353  BaseBdev1
00:15:07.353   11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:07.353   11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:15:07.353   11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:15:07.353   11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:15:07.353   11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:15:07.353   11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:15:07.353   11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:15:07.354   11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:15:07.354   11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:07.354   11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:07.354   11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:07.354   11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:15:07.354   11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:07.354   11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:07.354  [
00:15:07.354  {
00:15:07.354  "name": "BaseBdev1",
00:15:07.354  "aliases": [
00:15:07.354  "ef38849e-e0e8-4e00-9615-c79f3eccac2d"
00:15:07.354  ],
00:15:07.354  "product_name": "Malloc disk",
00:15:07.354  "block_size": 512,
00:15:07.354  "num_blocks": 65536,
00:15:07.354  "uuid": "ef38849e-e0e8-4e00-9615-c79f3eccac2d",
00:15:07.354  "assigned_rate_limits": {
00:15:07.354  "rw_ios_per_sec": 0,
00:15:07.354  "rw_mbytes_per_sec": 0,
00:15:07.354  "r_mbytes_per_sec": 0,
00:15:07.354  "w_mbytes_per_sec": 0
00:15:07.354  },
00:15:07.354  "claimed": true,
00:15:07.354  "claim_type": "exclusive_write",
00:15:07.354  "zoned": false,
00:15:07.354  "supported_io_types": {
00:15:07.354  "read": true,
00:15:07.354  "write": true,
00:15:07.354  "unmap": true,
00:15:07.354  "flush": true,
00:15:07.354  "reset": true,
00:15:07.354  "nvme_admin": false,
00:15:07.354  "nvme_io": false,
00:15:07.354  "nvme_io_md": false,
00:15:07.354  "write_zeroes": true,
00:15:07.354  "zcopy": true,
00:15:07.354  "get_zone_info": false,
00:15:07.354  "zone_management": false,
00:15:07.354  "zone_append": false,
00:15:07.354  "compare": false,
00:15:07.354  "compare_and_write": false,
00:15:07.354  "abort": true,
00:15:07.354  "seek_hole": false,
00:15:07.354  "seek_data": false,
00:15:07.354  "copy": true,
00:15:07.354  "nvme_iov_md": false
00:15:07.354  },
00:15:07.354  "memory_domains": [
00:15:07.354  {
00:15:07.354  "dma_device_id": "system",
00:15:07.354  "dma_device_type": 1
00:15:07.354  },
00:15:07.354  {
00:15:07.354  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:07.354  "dma_device_type": 2
00:15:07.354  }
00:15:07.354  ],
00:15:07.354  "driver_specific": {}
00:15:07.354  }
00:15:07.354  ]
00:15:07.354   11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:07.354   11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:15:07.354   11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:07.354   11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:07.354   11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:07.354   11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:07.354   11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:07.354   11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:07.354   11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:07.354   11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:07.354   11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:07.354   11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:07.354    11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:07.354    11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:07.354    11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:07.354    11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:07.354    11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:07.354   11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:07.354    "name": "Existed_Raid",
00:15:07.354    "uuid": "544b9133-dfdc-4cb5-9a72-447faba0fa1f",
00:15:07.354    "strip_size_kb": 64,
00:15:07.354    "state": "configuring",
00:15:07.354    "raid_level": "raid5f",
00:15:07.354    "superblock": true,
00:15:07.354    "num_base_bdevs": 3,
00:15:07.354    "num_base_bdevs_discovered": 2,
00:15:07.354    "num_base_bdevs_operational": 3,
00:15:07.354    "base_bdevs_list": [
00:15:07.354      {
00:15:07.354        "name": "BaseBdev1",
00:15:07.354        "uuid": "ef38849e-e0e8-4e00-9615-c79f3eccac2d",
00:15:07.354        "is_configured": true,
00:15:07.354        "data_offset": 2048,
00:15:07.354        "data_size": 63488
00:15:07.354      },
00:15:07.354      {
00:15:07.354        "name": null,
00:15:07.354        "uuid": "fd4e989a-dbef-4846-bf6d-b46bee81f363",
00:15:07.354        "is_configured": false,
00:15:07.354        "data_offset": 0,
00:15:07.354        "data_size": 63488
00:15:07.354      },
00:15:07.354      {
00:15:07.354        "name": "BaseBdev3",
00:15:07.354        "uuid": "75bf7213-9fc5-4aae-82a9-7adc7499d5bc",
00:15:07.354        "is_configured": true,
00:15:07.354        "data_offset": 2048,
00:15:07.354        "data_size": 63488
00:15:07.354      }
00:15:07.354    ]
00:15:07.354  }'
00:15:07.354   11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:07.354   11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:07.944    11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:15:07.944    11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:07.944    11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:07.944    11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:07.944    11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:07.944   11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:15:07.944   11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:15:07.944   11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:07.944   11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:07.944  [2024-12-16 11:36:33.724850] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:15:07.944   11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:07.944   11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:07.944   11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:07.944   11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:07.944   11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:07.944   11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:07.944   11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:07.944   11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:07.944   11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:07.944   11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:07.944   11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:07.944    11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:07.944    11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:07.944    11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:07.944    11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:07.944    11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:07.944   11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:07.944    "name": "Existed_Raid",
00:15:07.944    "uuid": "544b9133-dfdc-4cb5-9a72-447faba0fa1f",
00:15:07.944    "strip_size_kb": 64,
00:15:07.944    "state": "configuring",
00:15:07.944    "raid_level": "raid5f",
00:15:07.944    "superblock": true,
00:15:07.944    "num_base_bdevs": 3,
00:15:07.944    "num_base_bdevs_discovered": 1,
00:15:07.944    "num_base_bdevs_operational": 3,
00:15:07.944    "base_bdevs_list": [
00:15:07.944      {
00:15:07.944        "name": "BaseBdev1",
00:15:07.944        "uuid": "ef38849e-e0e8-4e00-9615-c79f3eccac2d",
00:15:07.944        "is_configured": true,
00:15:07.944        "data_offset": 2048,
00:15:07.944        "data_size": 63488
00:15:07.944      },
00:15:07.944      {
00:15:07.944        "name": null,
00:15:07.944        "uuid": "fd4e989a-dbef-4846-bf6d-b46bee81f363",
00:15:07.944        "is_configured": false,
00:15:07.944        "data_offset": 0,
00:15:07.944        "data_size": 63488
00:15:07.944      },
00:15:07.944      {
00:15:07.944        "name": null,
00:15:07.944        "uuid": "75bf7213-9fc5-4aae-82a9-7adc7499d5bc",
00:15:07.944        "is_configured": false,
00:15:07.944        "data_offset": 0,
00:15:07.944        "data_size": 63488
00:15:07.944      }
00:15:07.944    ]
00:15:07.944  }'
00:15:07.944   11:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:07.944   11:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:08.220    11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:08.220    11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:15:08.220    11:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:08.220    11:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:08.220    11:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:08.221   11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:15:08.221   11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:15:08.221   11:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:08.221   11:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:08.221  [2024-12-16 11:36:34.236012] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:08.221   11:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:08.221   11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:08.221   11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:08.221   11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:08.221   11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:08.221   11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:08.221   11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:08.221   11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:08.221   11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:08.221   11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:08.221   11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:08.221    11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:08.221    11:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:08.221    11:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:08.221    11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:08.221    11:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:08.481   11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:08.481    "name": "Existed_Raid",
00:15:08.481    "uuid": "544b9133-dfdc-4cb5-9a72-447faba0fa1f",
00:15:08.481    "strip_size_kb": 64,
00:15:08.481    "state": "configuring",
00:15:08.481    "raid_level": "raid5f",
00:15:08.481    "superblock": true,
00:15:08.481    "num_base_bdevs": 3,
00:15:08.481    "num_base_bdevs_discovered": 2,
00:15:08.481    "num_base_bdevs_operational": 3,
00:15:08.481    "base_bdevs_list": [
00:15:08.481      {
00:15:08.481        "name": "BaseBdev1",
00:15:08.481        "uuid": "ef38849e-e0e8-4e00-9615-c79f3eccac2d",
00:15:08.481        "is_configured": true,
00:15:08.481        "data_offset": 2048,
00:15:08.481        "data_size": 63488
00:15:08.481      },
00:15:08.481      {
00:15:08.481        "name": null,
00:15:08.481        "uuid": "fd4e989a-dbef-4846-bf6d-b46bee81f363",
00:15:08.481        "is_configured": false,
00:15:08.481        "data_offset": 0,
00:15:08.481        "data_size": 63488
00:15:08.481      },
00:15:08.481      {
00:15:08.481        "name": "BaseBdev3",
00:15:08.481        "uuid": "75bf7213-9fc5-4aae-82a9-7adc7499d5bc",
00:15:08.481        "is_configured": true,
00:15:08.481        "data_offset": 2048,
00:15:08.481        "data_size": 63488
00:15:08.481      }
00:15:08.481    ]
00:15:08.481  }'
00:15:08.481   11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:08.481   11:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
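The two RPCs exercised in this step are bdev_raid_remove_base_bdev, which detaches BaseBdev3 (num_base_bdevs_discovered drops from 2 to 1), and bdev_raid_add_base_bdev, which re-attaches it (back to 2). The array stays in "configuring" throughout because the BaseBdev2 slot is still unconfigured. Driven by hand against the same RPC socket, the cycle would look roughly like this:

rpc_cmd bdev_raid_remove_base_bdev BaseBdev3             # slot for BaseBdev3 becomes unconfigured
rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3   # slot re-claimed, raid still "configuring"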
00:15:08.740    11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:15:08.740    11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:08.740    11:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:08.740    11:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:08.740    11:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:08.740   11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:15:08.740   11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:15:08.741   11:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:08.741   11:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:08.741  [2024-12-16 11:36:34.747141] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:15:08.741   11:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:08.741   11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:08.741   11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:08.741   11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:08.741   11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:08.741   11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:08.741   11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:08.741   11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:08.741   11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:08.741   11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:08.741   11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:08.741    11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:08.741    11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:08.741    11:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:08.741    11:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:08.741    11:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:09.000   11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:09.000    "name": "Existed_Raid",
00:15:09.000    "uuid": "544b9133-dfdc-4cb5-9a72-447faba0fa1f",
00:15:09.000    "strip_size_kb": 64,
00:15:09.000    "state": "configuring",
00:15:09.000    "raid_level": "raid5f",
00:15:09.000    "superblock": true,
00:15:09.000    "num_base_bdevs": 3,
00:15:09.000    "num_base_bdevs_discovered": 1,
00:15:09.000    "num_base_bdevs_operational": 3,
00:15:09.000    "base_bdevs_list": [
00:15:09.000      {
00:15:09.000        "name": null,
00:15:09.000        "uuid": "ef38849e-e0e8-4e00-9615-c79f3eccac2d",
00:15:09.000        "is_configured": false,
00:15:09.001        "data_offset": 0,
00:15:09.001        "data_size": 63488
00:15:09.001      },
00:15:09.001      {
00:15:09.001        "name": null,
00:15:09.001        "uuid": "fd4e989a-dbef-4846-bf6d-b46bee81f363",
00:15:09.001        "is_configured": false,
00:15:09.001        "data_offset": 0,
00:15:09.001        "data_size": 63488
00:15:09.001      },
00:15:09.001      {
00:15:09.001        "name": "BaseBdev3",
00:15:09.001        "uuid": "75bf7213-9fc5-4aae-82a9-7adc7499d5bc",
00:15:09.001        "is_configured": true,
00:15:09.001        "data_offset": 2048,
00:15:09.001        "data_size": 63488
00:15:09.001      }
00:15:09.001    ]
00:15:09.001  }'
00:15:09.001   11:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:09.001   11:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:09.259    11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:15:09.259    11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:09.260    11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:09.260    11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:09.260    11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:09.260   11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:15:09.260   11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:15:09.260   11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:09.260   11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:09.260  [2024-12-16 11:36:35.157045] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:09.260   11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:09.260   11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:09.260   11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:09.260   11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:09.260   11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:09.260   11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:09.260   11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:09.260   11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:09.260   11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:09.260   11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:09.260   11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:09.260    11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:09.260    11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:09.260    11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:09.260    11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:09.260    11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:09.260   11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:09.260    "name": "Existed_Raid",
00:15:09.260    "uuid": "544b9133-dfdc-4cb5-9a72-447faba0fa1f",
00:15:09.260    "strip_size_kb": 64,
00:15:09.260    "state": "configuring",
00:15:09.260    "raid_level": "raid5f",
00:15:09.260    "superblock": true,
00:15:09.260    "num_base_bdevs": 3,
00:15:09.260    "num_base_bdevs_discovered": 2,
00:15:09.260    "num_base_bdevs_operational": 3,
00:15:09.260    "base_bdevs_list": [
00:15:09.260      {
00:15:09.260        "name": null,
00:15:09.260        "uuid": "ef38849e-e0e8-4e00-9615-c79f3eccac2d",
00:15:09.260        "is_configured": false,
00:15:09.260        "data_offset": 0,
00:15:09.260        "data_size": 63488
00:15:09.260      },
00:15:09.260      {
00:15:09.260        "name": "BaseBdev2",
00:15:09.260        "uuid": "fd4e989a-dbef-4846-bf6d-b46bee81f363",
00:15:09.260        "is_configured": true,
00:15:09.260        "data_offset": 2048,
00:15:09.260        "data_size": 63488
00:15:09.260      },
00:15:09.260      {
00:15:09.260        "name": "BaseBdev3",
00:15:09.260        "uuid": "75bf7213-9fc5-4aae-82a9-7adc7499d5bc",
00:15:09.260        "is_configured": true,
00:15:09.260        "data_offset": 2048,
00:15:09.260        "data_size": 63488
00:15:09.260      }
00:15:09.260    ]
00:15:09.260  }'
00:15:09.260   11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:09.260   11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:09.829    11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:15:09.829    11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:09.829    11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:09.829    11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:09.829    11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:15:09.829    11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:09.829    11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:09.829    11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:09.829    11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:15:09.829    11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ef38849e-e0e8-4e00-9615-c79f3eccac2d
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:09.829  [2024-12-16 11:36:35.743079] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:15:09.829  [2024-12-16 11:36:35.743272] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:15:09.829  [2024-12-16 11:36:35.743291] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:15:09.829  [2024-12-16 11:36:35.743580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:15:09.829  NewBaseBdev
00:15:09.829  [2024-12-16 11:36:35.744048] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:15:09.829  [2024-12-16 11:36:35.744066] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00
00:15:09.829  [2024-12-16 11:36:35.744177] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:09.829  [
00:15:09.829  {
00:15:09.829  "name": "NewBaseBdev",
00:15:09.829  "aliases": [
00:15:09.829  "ef38849e-e0e8-4e00-9615-c79f3eccac2d"
00:15:09.829  ],
00:15:09.829  "product_name": "Malloc disk",
00:15:09.829  "block_size": 512,
00:15:09.829  "num_blocks": 65536,
00:15:09.829  "uuid": "ef38849e-e0e8-4e00-9615-c79f3eccac2d",
00:15:09.829  "assigned_rate_limits": {
00:15:09.829  "rw_ios_per_sec": 0,
00:15:09.829  "rw_mbytes_per_sec": 0,
00:15:09.829  "r_mbytes_per_sec": 0,
00:15:09.829  "w_mbytes_per_sec": 0
00:15:09.829  },
00:15:09.829  "claimed": true,
00:15:09.829  "claim_type": "exclusive_write",
00:15:09.829  "zoned": false,
00:15:09.829  "supported_io_types": {
00:15:09.829  "read": true,
00:15:09.829  "write": true,
00:15:09.829  "unmap": true,
00:15:09.829  "flush": true,
00:15:09.829  "reset": true,
00:15:09.829  "nvme_admin": false,
00:15:09.829  "nvme_io": false,
00:15:09.829  "nvme_io_md": false,
00:15:09.829  "write_zeroes": true,
00:15:09.829  "zcopy": true,
00:15:09.829  "get_zone_info": false,
00:15:09.829  "zone_management": false,
00:15:09.829  "zone_append": false,
00:15:09.829  "compare": false,
00:15:09.829  "compare_and_write": false,
00:15:09.829  "abort": true,
00:15:09.829  "seek_hole": false,
00:15:09.829  "seek_data": false,
00:15:09.829  "copy": true,
00:15:09.829  "nvme_iov_md": false
00:15:09.829  },
00:15:09.829  "memory_domains": [
00:15:09.829  {
00:15:09.829  "dma_device_id": "system",
00:15:09.829  "dma_device_type": 1
00:15:09.829  },
00:15:09.829  {
00:15:09.829  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:09.829  "dma_device_type": 2
00:15:09.829  }
00:15:09.829  ],
00:15:09.829  "driver_specific": {}
00:15:09.829  }
00:15:09.829  ]
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
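From the trace above, waitforbdev boils down to: default the timeout to 2000 ms, flush pending examine callbacks with bdev_wait_for_examine, then query the bdev by name. A retry loop is assumed in the sketch below; in this run the recreated NewBaseBdev is present on the first query.

waitforbdev() {   # sketch reconstructed from the trace, not the exact helper
    local bdev_name=$1 bdev_timeout=${2:-2000} i
    rpc_cmd bdev_wait_for_examine
    for ((i = 0; i < 10; i++)); do       # retry loop assumed, not visible above
        rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" && return 0
        sleep 0.5
    done
    return 1
}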
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:09.829    11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:09.829    11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:09.829    11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:09.829    11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:09.829    11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:09.829    "name": "Existed_Raid",
00:15:09.829    "uuid": "544b9133-dfdc-4cb5-9a72-447faba0fa1f",
00:15:09.829    "strip_size_kb": 64,
00:15:09.829    "state": "online",
00:15:09.829    "raid_level": "raid5f",
00:15:09.829    "superblock": true,
00:15:09.829    "num_base_bdevs": 3,
00:15:09.829    "num_base_bdevs_discovered": 3,
00:15:09.829    "num_base_bdevs_operational": 3,
00:15:09.829    "base_bdevs_list": [
00:15:09.829      {
00:15:09.829        "name": "NewBaseBdev",
00:15:09.829        "uuid": "ef38849e-e0e8-4e00-9615-c79f3eccac2d",
00:15:09.829        "is_configured": true,
00:15:09.829        "data_offset": 2048,
00:15:09.829        "data_size": 63488
00:15:09.829      },
00:15:09.829      {
00:15:09.829        "name": "BaseBdev2",
00:15:09.829        "uuid": "fd4e989a-dbef-4846-bf6d-b46bee81f363",
00:15:09.829        "is_configured": true,
00:15:09.829        "data_offset": 2048,
00:15:09.829        "data_size": 63488
00:15:09.829      },
00:15:09.829      {
00:15:09.829        "name": "BaseBdev3",
00:15:09.829        "uuid": "75bf7213-9fc5-4aae-82a9-7adc7499d5bc",
00:15:09.829        "is_configured": true,
00:15:09.829        "data_offset": 2048,
00:15:09.829        "data_size": 63488
00:15:09.829      }
00:15:09.829    ]
00:15:09.829  }'
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:09.829   11:36:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:10.398   11:36:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:15:10.398   11:36:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:15:10.398   11:36:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:15:10.398   11:36:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:15:10.398   11:36:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:15:10.398   11:36:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:15:10.398    11:36:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:15:10.398    11:36:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:15:10.398    11:36:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:10.398    11:36:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:10.398  [2024-12-16 11:36:36.242502] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:10.398    11:36:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:10.398   11:36:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:15:10.398    "name": "Existed_Raid",
00:15:10.398    "aliases": [
00:15:10.398      "544b9133-dfdc-4cb5-9a72-447faba0fa1f"
00:15:10.398    ],
00:15:10.398    "product_name": "Raid Volume",
00:15:10.398    "block_size": 512,
00:15:10.398    "num_blocks": 126976,
00:15:10.398    "uuid": "544b9133-dfdc-4cb5-9a72-447faba0fa1f",
00:15:10.398    "assigned_rate_limits": {
00:15:10.398      "rw_ios_per_sec": 0,
00:15:10.398      "rw_mbytes_per_sec": 0,
00:15:10.398      "r_mbytes_per_sec": 0,
00:15:10.398      "w_mbytes_per_sec": 0
00:15:10.398    },
00:15:10.398    "claimed": false,
00:15:10.398    "zoned": false,
00:15:10.399    "supported_io_types": {
00:15:10.399      "read": true,
00:15:10.399      "write": true,
00:15:10.399      "unmap": false,
00:15:10.399      "flush": false,
00:15:10.399      "reset": true,
00:15:10.399      "nvme_admin": false,
00:15:10.399      "nvme_io": false,
00:15:10.399      "nvme_io_md": false,
00:15:10.399      "write_zeroes": true,
00:15:10.399      "zcopy": false,
00:15:10.399      "get_zone_info": false,
00:15:10.399      "zone_management": false,
00:15:10.399      "zone_append": false,
00:15:10.399      "compare": false,
00:15:10.399      "compare_and_write": false,
00:15:10.399      "abort": false,
00:15:10.399      "seek_hole": false,
00:15:10.399      "seek_data": false,
00:15:10.399      "copy": false,
00:15:10.399      "nvme_iov_md": false
00:15:10.399    },
00:15:10.399    "driver_specific": {
00:15:10.399      "raid": {
00:15:10.399        "uuid": "544b9133-dfdc-4cb5-9a72-447faba0fa1f",
00:15:10.399        "strip_size_kb": 64,
00:15:10.399        "state": "online",
00:15:10.399        "raid_level": "raid5f",
00:15:10.399        "superblock": true,
00:15:10.399        "num_base_bdevs": 3,
00:15:10.399        "num_base_bdevs_discovered": 3,
00:15:10.399        "num_base_bdevs_operational": 3,
00:15:10.399        "base_bdevs_list": [
00:15:10.399          {
00:15:10.399            "name": "NewBaseBdev",
00:15:10.399            "uuid": "ef38849e-e0e8-4e00-9615-c79f3eccac2d",
00:15:10.399            "is_configured": true,
00:15:10.399            "data_offset": 2048,
00:15:10.399            "data_size": 63488
00:15:10.399          },
00:15:10.399          {
00:15:10.399            "name": "BaseBdev2",
00:15:10.399            "uuid": "fd4e989a-dbef-4846-bf6d-b46bee81f363",
00:15:10.399            "is_configured": true,
00:15:10.399            "data_offset": 2048,
00:15:10.399            "data_size": 63488
00:15:10.399          },
00:15:10.399          {
00:15:10.399            "name": "BaseBdev3",
00:15:10.399            "uuid": "75bf7213-9fc5-4aae-82a9-7adc7499d5bc",
00:15:10.399            "is_configured": true,
00:15:10.399            "data_offset": 2048,
00:15:10.399            "data_size": 63488
00:15:10.399          }
00:15:10.399        ]
00:15:10.399      }
00:15:10.399    }
00:15:10.399  }'
00:15:10.399    11:36:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:15:10.399   11:36:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:15:10.399  BaseBdev2
00:15:10.399  BaseBdev3'
00:15:10.399    11:36:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:10.399   11:36:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:15:10.399   11:36:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:10.399    11:36:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:10.399    11:36:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:15:10.399    11:36:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:10.399    11:36:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:10.399    11:36:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:10.399   11:36:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:15:10.399   11:36:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:15:10.399   11:36:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:10.399    11:36:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:10.399    11:36:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:15:10.399    11:36:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:10.399    11:36:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:10.399    11:36:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:10.399   11:36:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:15:10.399   11:36:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:15:10.399   11:36:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:10.399    11:36:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:15:10.399    11:36:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:10.399    11:36:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:10.399    11:36:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:10.657    11:36:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:10.658   11:36:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:15:10.658   11:36:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
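verify_raid_bdev_properties, as traced above, checks that the raid volume and every configured base bdev agree on block size and metadata layout. The whole check is visible in the trace and reduces to roughly the following (re-querying the raid bdev instead of caching it, for brevity):

# Compare [.block_size, .md_size, .md_interleave, .dif_type] of the raid
# volume against each configured base bdev; any mismatch fails the test
# (the scripts run under errexit).
cmp_raid_bdev=$(rpc_cmd bdev_get_bdevs -b Existed_Raid \
    | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
for name in $(rpc_cmd bdev_get_bdevs -b Existed_Raid \
    | jq -r '.[] | .driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'); do
    cmp_base_bdev=$(rpc_cmd bdev_get_bdevs -b "$name" \
        | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
    [[ $cmp_raid_bdev == "$cmp_base_bdev" ]]
done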
00:15:10.658   11:36:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:15:10.658   11:36:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:10.658   11:36:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:10.658  [2024-12-16 11:36:36.497863] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:15:10.658  [2024-12-16 11:36:36.497940] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:10.658  [2024-12-16 11:36:36.498071] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:10.658  [2024-12-16 11:36:36.498399] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:10.658  [2024-12-16 11:36:36.498464] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline
00:15:10.658   11:36:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:10.658   11:36:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 91424
00:15:10.658   11:36:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 91424 ']'
00:15:10.658   11:36:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 91424
00:15:10.658    11:36:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname
00:15:10.658   11:36:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:15:10.658    11:36:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91424
00:15:10.658   11:36:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:15:10.658  killing process with pid 91424
00:15:10.658   11:36:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:15:10.658   11:36:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91424'
00:15:10.658   11:36:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 91424
00:15:10.658  [2024-12-16 11:36:36.544232] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:15:10.658   11:36:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 91424
00:15:10.658  [2024-12-16 11:36:36.575589] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:15:10.918   11:36:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:15:10.918  
00:15:10.918  real	0m9.068s
00:15:10.918  user	0m15.473s
00:15:10.918  sys	0m1.870s
00:15:10.918   11:36:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
00:15:10.918   11:36:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:10.918  ************************************
00:15:10.918  END TEST raid5f_state_function_test_sb
00:15:10.918  ************************************
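The teardown traced just before the banner (killprocess 91424) is a small wrapper around kill/wait: confirm the pid argument is non-empty and the process is alive, look up its name (reactor_0 here, i.e. the SPDK app), log, then kill it and wait for it to exit. Roughly, under those assumptions:

killprocess() {   # sketch of the teardown helper as traced; sudo branch omitted
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 1
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}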
00:15:10.918   11:36:36 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3
00:15:10.919   11:36:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:15:10.919   11:36:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:15:10.919   11:36:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:15:10.919  ************************************
00:15:10.919  START TEST raid5f_superblock_test
00:15:10.919  ************************************
00:15:10.919   11:36:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3
00:15:10.919   11:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f
00:15:10.919   11:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3
00:15:10.919   11:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:15:10.919   11:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:15:10.919   11:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:15:10.919   11:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:15:10.919   11:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:15:10.919   11:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:15:10.919   11:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:15:10.919   11:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:15:10.919   11:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:15:10.919   11:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:15:10.919   11:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:15:10.919   11:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']'
00:15:10.919   11:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:15:10.919   11:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:15:10.919   11:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=92032
00:15:10.919   11:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 92032
00:15:10.919   11:36:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 92032 ']'
00:15:10.919   11:36:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:10.919   11:36:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:15:10.919   11:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:15:10.919  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:10.919   11:36:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:10.919   11:36:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:15:10.919   11:36:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:10.919  [2024-12-16 11:36:36.969713] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:15:10.919  [2024-12-16 11:36:36.969986] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92032 ]
00:15:11.178  [2024-12-16 11:36:37.131305] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:11.178  [2024-12-16 11:36:37.177343] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:15:11.178  [2024-12-16 11:36:37.219365] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:11.178  [2024-12-16 11:36:37.219403] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:11.747   11:36:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:15:11.747   11:36:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:15:11.747   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:15:11.747   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:15:11.747   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:15:11.747   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:15:11.747   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:15:11.747   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:15:11.747   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:15:11.747   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:15:11.747   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:15:11.747   11:36:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:11.747   11:36:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:12.007  malloc1
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:12.008  [2024-12-16 11:36:37.828738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:15:12.008  [2024-12-16 11:36:37.828852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:12.008  [2024-12-16 11:36:37.828897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:15:12.008  [2024-12-16 11:36:37.828942] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:12.008  [2024-12-16 11:36:37.831017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:12.008  [2024-12-16 11:36:37.831088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:15:12.008  pt1
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:12.008  malloc2
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:12.008  [2024-12-16 11:36:37.865335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:12.008  [2024-12-16 11:36:37.865428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:12.008  [2024-12-16 11:36:37.865460] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:15:12.008  [2024-12-16 11:36:37.865488] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:12.008  [2024-12-16 11:36:37.867555] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:12.008  [2024-12-16 11:36:37.867641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:12.008  pt2
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:12.008  malloc3
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:12.008  [2024-12-16 11:36:37.897652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:15:12.008  [2024-12-16 11:36:37.897738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:12.008  [2024-12-16 11:36:37.897789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:15:12.008  [2024-12-16 11:36:37.897819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:12.008  [2024-12-16 11:36:37.899823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:12.008  [2024-12-16 11:36:37.899909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:15:12.008  pt3
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:12.008  [2024-12-16 11:36:37.909679] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:15:12.008  [2024-12-16 11:36:37.911459] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:12.008  [2024-12-16 11:36:37.911570] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:15:12.008  [2024-12-16 11:36:37.911756] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:15:12.008  [2024-12-16 11:36:37.911824] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:15:12.008  [2024-12-16 11:36:37.912076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:15:12.008  [2024-12-16 11:36:37.912567] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:15:12.008  [2024-12-16 11:36:37.912617] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:15:12.008  [2024-12-16 11:36:37.912771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
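The superblock test builds its array from three malloc bdevs, each wrapped in a passthru bdev with a fixed UUID, and then creates the raid5f volume with an on-disk superblock (-s). Condensed from the trace above into one loop:

for i in 1 2 3; do
    rpc_cmd bdev_malloc_create 32 512 -b "malloc$i"
    rpc_cmd bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"
done
rpc_cmd bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s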
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:12.008    11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:12.008    11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:12.008    11:36:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.008    11:36:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:12.008    11:36:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:12.008    "name": "raid_bdev1",
00:15:12.008    "uuid": "b934db85-00c2-44ad-a123-f4f238d5751b",
00:15:12.008    "strip_size_kb": 64,
00:15:12.008    "state": "online",
00:15:12.008    "raid_level": "raid5f",
00:15:12.008    "superblock": true,
00:15:12.008    "num_base_bdevs": 3,
00:15:12.008    "num_base_bdevs_discovered": 3,
00:15:12.008    "num_base_bdevs_operational": 3,
00:15:12.008    "base_bdevs_list": [
00:15:12.008      {
00:15:12.008        "name": "pt1",
00:15:12.008        "uuid": "00000000-0000-0000-0000-000000000001",
00:15:12.008        "is_configured": true,
00:15:12.008        "data_offset": 2048,
00:15:12.008        "data_size": 63488
00:15:12.008      },
00:15:12.008      {
00:15:12.008        "name": "pt2",
00:15:12.008        "uuid": "00000000-0000-0000-0000-000000000002",
00:15:12.008        "is_configured": true,
00:15:12.008        "data_offset": 2048,
00:15:12.008        "data_size": 63488
00:15:12.008      },
00:15:12.008      {
00:15:12.008        "name": "pt3",
00:15:12.008        "uuid": "00000000-0000-0000-0000-000000000003",
00:15:12.008        "is_configured": true,
00:15:12.008        "data_offset": 2048,
00:15:12.008        "data_size": 63488
00:15:12.008      }
00:15:12.008    ]
00:15:12.008  }'
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:12.008   11:36:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:12.578   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:15:12.578   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:15:12.578   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:15:12.578   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:15:12.578   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:15:12.578   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:15:12.578    11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:15:12.578    11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:12.578    11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.578    11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:12.578  [2024-12-16 11:36:38.413523] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:12.578    11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.578   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:15:12.578    "name": "raid_bdev1",
00:15:12.578    "aliases": [
00:15:12.578      "b934db85-00c2-44ad-a123-f4f238d5751b"
00:15:12.578    ],
00:15:12.578    "product_name": "Raid Volume",
00:15:12.578    "block_size": 512,
00:15:12.578    "num_blocks": 126976,
00:15:12.578    "uuid": "b934db85-00c2-44ad-a123-f4f238d5751b",
00:15:12.578    "assigned_rate_limits": {
00:15:12.578      "rw_ios_per_sec": 0,
00:15:12.578      "rw_mbytes_per_sec": 0,
00:15:12.578      "r_mbytes_per_sec": 0,
00:15:12.578      "w_mbytes_per_sec": 0
00:15:12.578    },
00:15:12.578    "claimed": false,
00:15:12.578    "zoned": false,
00:15:12.578    "supported_io_types": {
00:15:12.578      "read": true,
00:15:12.578      "write": true,
00:15:12.578      "unmap": false,
00:15:12.578      "flush": false,
00:15:12.578      "reset": true,
00:15:12.578      "nvme_admin": false,
00:15:12.578      "nvme_io": false,
00:15:12.578      "nvme_io_md": false,
00:15:12.578      "write_zeroes": true,
00:15:12.578      "zcopy": false,
00:15:12.578      "get_zone_info": false,
00:15:12.578      "zone_management": false,
00:15:12.578      "zone_append": false,
00:15:12.578      "compare": false,
00:15:12.578      "compare_and_write": false,
00:15:12.578      "abort": false,
00:15:12.578      "seek_hole": false,
00:15:12.578      "seek_data": false,
00:15:12.578      "copy": false,
00:15:12.578      "nvme_iov_md": false
00:15:12.578    },
00:15:12.578    "driver_specific": {
00:15:12.578      "raid": {
00:15:12.578        "uuid": "b934db85-00c2-44ad-a123-f4f238d5751b",
00:15:12.578        "strip_size_kb": 64,
00:15:12.578        "state": "online",
00:15:12.578        "raid_level": "raid5f",
00:15:12.578        "superblock": true,
00:15:12.578        "num_base_bdevs": 3,
00:15:12.578        "num_base_bdevs_discovered": 3,
00:15:12.578        "num_base_bdevs_operational": 3,
00:15:12.578        "base_bdevs_list": [
00:15:12.578          {
00:15:12.578            "name": "pt1",
00:15:12.578            "uuid": "00000000-0000-0000-0000-000000000001",
00:15:12.578            "is_configured": true,
00:15:12.578            "data_offset": 2048,
00:15:12.578            "data_size": 63488
00:15:12.578          },
00:15:12.578          {
00:15:12.578            "name": "pt2",
00:15:12.578            "uuid": "00000000-0000-0000-0000-000000000002",
00:15:12.578            "is_configured": true,
00:15:12.578            "data_offset": 2048,
00:15:12.578            "data_size": 63488
00:15:12.578          },
00:15:12.578          {
00:15:12.578            "name": "pt3",
00:15:12.578            "uuid": "00000000-0000-0000-0000-000000000003",
00:15:12.578            "is_configured": true,
00:15:12.578            "data_offset": 2048,
00:15:12.578            "data_size": 63488
00:15:12.578          }
00:15:12.578        ]
00:15:12.578      }
00:15:12.578    }
00:15:12.578  }'
00:15:12.579    11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:15:12.579   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:15:12.579  pt2
00:15:12.579  pt3'
00:15:12.579    11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:12.579   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:15:12.579   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:12.579    11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:15:12.579    11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:12.579    11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.579    11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:12.579    11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.579   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:15:12.579   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:15:12.579   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:12.579    11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:12.579    11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:15:12.579    11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.579    11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:12.579    11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.579   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:15:12.579   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:15:12.579   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:12.579    11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:15:12.579    11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:12.579    11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.579    11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:12.839    11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.839   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:15:12.839   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:15:12.839    11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:12.839    11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:15:12.839    11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.839    11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:12.839  [2024-12-16 11:36:38.696972] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:12.839    11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.839   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b934db85-00c2-44ad-a123-f4f238d5751b
00:15:12.839   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b934db85-00c2-44ad-a123-f4f238d5751b ']'
00:15:12.839   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:15:12.839   11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.839   11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:12.839  [2024-12-16 11:36:38.744691] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:12.839  [2024-12-16 11:36:38.744751] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:12.839  [2024-12-16 11:36:38.744871] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:12.839  [2024-12-16 11:36:38.744966] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:12.839  [2024-12-16 11:36:38.745029] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:15:12.839   11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.839    11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:12.839    11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.839    11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:12.839    11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:15:12.839    11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.839   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:15:12.839   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:15:12.839   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:15:12.839   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:15:12.839   11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.839   11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:12.839   11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.840   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:15:12.840   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:15:12.840   11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.840   11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:12.840   11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.840   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:15:12.840   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:15:12.840   11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.840   11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:12.840   11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.840    11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:15:12.840    11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:15:12.840    11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.840    11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:12.840    11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.840   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:15:12.840   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:15:12.840   11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:15:12.840   11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:15:12.840   11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:15:12.840   11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:12.840    11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:15:12.840   11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:12.840   11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:15:12.840   11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.840   11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:12.840  [2024-12-16 11:36:38.892499] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:15:12.840  [2024-12-16 11:36:38.894588] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:15:12.840  [2024-12-16 11:36:38.894636] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:15:12.840  [2024-12-16 11:36:38.894687] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:15:12.840  [2024-12-16 11:36:38.894743] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:15:12.840  [2024-12-16 11:36:38.894778] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:15:12.840  [2024-12-16 11:36:38.894791] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:12.840  [2024-12-16 11:36:38.894804] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring
00:15:12.840  request:
00:15:12.840  {
00:15:12.840  "name": "raid_bdev1",
00:15:12.840  "raid_level": "raid5f",
00:15:12.840  "base_bdevs": [
00:15:12.840  "malloc1",
00:15:12.840  "malloc2",
00:15:12.840  "malloc3"
00:15:12.840  ],
00:15:12.840  "strip_size_kb": 64,
00:15:12.840  "superblock": false,
00:15:12.840  "method": "bdev_raid_create",
00:15:12.840  "req_id": 1
00:15:12.840  }
00:15:12.840  Got JSON-RPC error response
00:15:12.840  response:
00:15:12.840  {
00:15:12.840  "code": -17,
00:15:12.840  "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:15:12.840  }
00:15:12.840   11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:15:12.840   11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:15:12.840   11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:15:12.840   11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:15:12.840   11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:15:13.100    11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:13.100    11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:15:13.100    11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:13.100    11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:13.100    11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:13.100   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:15:13.100   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:15:13.100   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:15:13.100   11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:13.100   11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:13.100  [2024-12-16 11:36:38.956347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:15:13.100  [2024-12-16 11:36:38.956438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:13.100  [2024-12-16 11:36:38.956470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:15:13.100  [2024-12-16 11:36:38.956499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:13.100  [2024-12-16 11:36:38.958647] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:13.100  [2024-12-16 11:36:38.958718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:15:13.100  [2024-12-16 11:36:38.958813] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:15:13.100  [2024-12-16 11:36:38.958879] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:15:13.100  pt1
00:15:13.100   11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:13.100   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:15:13.100   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:13.100   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:13.100   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:13.101   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:13.101   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:13.101   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:13.101   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:13.101   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:13.101   11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:13.101    11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:13.101    11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:13.101    11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:13.101    11:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:13.101    11:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:13.101   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:13.101    "name": "raid_bdev1",
00:15:13.101    "uuid": "b934db85-00c2-44ad-a123-f4f238d5751b",
00:15:13.101    "strip_size_kb": 64,
00:15:13.101    "state": "configuring",
00:15:13.101    "raid_level": "raid5f",
00:15:13.101    "superblock": true,
00:15:13.101    "num_base_bdevs": 3,
00:15:13.101    "num_base_bdevs_discovered": 1,
00:15:13.101    "num_base_bdevs_operational": 3,
00:15:13.101    "base_bdevs_list": [
00:15:13.101      {
00:15:13.101        "name": "pt1",
00:15:13.101        "uuid": "00000000-0000-0000-0000-000000000001",
00:15:13.101        "is_configured": true,
00:15:13.101        "data_offset": 2048,
00:15:13.101        "data_size": 63488
00:15:13.101      },
00:15:13.101      {
00:15:13.101        "name": null,
00:15:13.101        "uuid": "00000000-0000-0000-0000-000000000002",
00:15:13.101        "is_configured": false,
00:15:13.101        "data_offset": 2048,
00:15:13.101        "data_size": 63488
00:15:13.101      },
00:15:13.101      {
00:15:13.101        "name": null,
00:15:13.101        "uuid": "00000000-0000-0000-0000-000000000003",
00:15:13.101        "is_configured": false,
00:15:13.101        "data_offset": 2048,
00:15:13.101        "data_size": 63488
00:15:13.101      }
00:15:13.101    ]
00:15:13.101  }'
00:15:13.101   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:13.101   11:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:13.360   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:15:13.360   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:13.360   11:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:13.360   11:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:13.360  [2024-12-16 11:36:39.419617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:13.360  [2024-12-16 11:36:39.419740] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:13.360  [2024-12-16 11:36:39.419780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:15:13.360  [2024-12-16 11:36:39.419821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:13.360  [2024-12-16 11:36:39.420289] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:13.360  [2024-12-16 11:36:39.420351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:13.360  [2024-12-16 11:36:39.420465] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:15:13.360  [2024-12-16 11:36:39.420522] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:13.360  pt2
00:15:13.620   11:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:13.620   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:15:13.620   11:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:13.620   11:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:13.620  [2024-12-16 11:36:39.431605] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:15:13.620   11:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:13.620   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:15:13.620   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:13.620   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:13.620   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:13.620   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:13.620   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:13.620   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:13.620   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:13.620   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:13.620   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:13.620    11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:13.620    11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:13.620    11:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:13.620    11:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:13.620    11:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:13.620   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:13.620    "name": "raid_bdev1",
00:15:13.620    "uuid": "b934db85-00c2-44ad-a123-f4f238d5751b",
00:15:13.620    "strip_size_kb": 64,
00:15:13.620    "state": "configuring",
00:15:13.620    "raid_level": "raid5f",
00:15:13.620    "superblock": true,
00:15:13.620    "num_base_bdevs": 3,
00:15:13.620    "num_base_bdevs_discovered": 1,
00:15:13.620    "num_base_bdevs_operational": 3,
00:15:13.620    "base_bdevs_list": [
00:15:13.620      {
00:15:13.620        "name": "pt1",
00:15:13.620        "uuid": "00000000-0000-0000-0000-000000000001",
00:15:13.620        "is_configured": true,
00:15:13.620        "data_offset": 2048,
00:15:13.620        "data_size": 63488
00:15:13.620      },
00:15:13.620      {
00:15:13.620        "name": null,
00:15:13.620        "uuid": "00000000-0000-0000-0000-000000000002",
00:15:13.620        "is_configured": false,
00:15:13.620        "data_offset": 0,
00:15:13.620        "data_size": 63488
00:15:13.620      },
00:15:13.620      {
00:15:13.620        "name": null,
00:15:13.620        "uuid": "00000000-0000-0000-0000-000000000003",
00:15:13.620        "is_configured": false,
00:15:13.620        "data_offset": 2048,
00:15:13.620        "data_size": 63488
00:15:13.620      }
00:15:13.620    ]
00:15:13.620  }'
00:15:13.620   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:13.620   11:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:13.880   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:15:13.880   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:15:13.880   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:13.880   11:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:13.880   11:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:13.880  [2024-12-16 11:36:39.874835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:13.880  [2024-12-16 11:36:39.874965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:13.880  [2024-12-16 11:36:39.875002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:15:13.880  [2024-12-16 11:36:39.875030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:13.880  [2024-12-16 11:36:39.875475] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:13.880  [2024-12-16 11:36:39.875547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:13.880  [2024-12-16 11:36:39.875661] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:15:13.881  [2024-12-16 11:36:39.875719] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:13.881  pt2
00:15:13.881   11:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:13.881   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:15:13.881   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:15:13.881   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:15:13.881   11:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:13.881   11:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:13.881  [2024-12-16 11:36:39.886783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:15:13.881  [2024-12-16 11:36:39.886861] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:13.881  [2024-12-16 11:36:39.886895] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:15:13.881  [2024-12-16 11:36:39.886928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:13.881  [2024-12-16 11:36:39.887284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:13.881  [2024-12-16 11:36:39.887338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:15:13.881  [2024-12-16 11:36:39.887418] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:15:13.881  [2024-12-16 11:36:39.887461] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:15:13.881  [2024-12-16 11:36:39.887604] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:15:13.881  [2024-12-16 11:36:39.887645] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:15:13.881  [2024-12-16 11:36:39.887885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:15:13.881  [2024-12-16 11:36:39.888325] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:15:13.881  [2024-12-16 11:36:39.888376] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:15:13.881  [2024-12-16 11:36:39.888512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:13.881  pt3
00:15:13.881   11:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:13.881   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:15:13.881   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:15:13.881   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:15:13.881   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:13.881   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:13.881   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:13.881   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:13.881   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:13.881   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:13.881   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:13.881   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:13.881   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:13.881    11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:13.881    11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:13.881    11:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:13.881    11:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:13.881    11:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:14.140   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:14.140    "name": "raid_bdev1",
00:15:14.140    "uuid": "b934db85-00c2-44ad-a123-f4f238d5751b",
00:15:14.140    "strip_size_kb": 64,
00:15:14.140    "state": "online",
00:15:14.140    "raid_level": "raid5f",
00:15:14.140    "superblock": true,
00:15:14.140    "num_base_bdevs": 3,
00:15:14.140    "num_base_bdevs_discovered": 3,
00:15:14.140    "num_base_bdevs_operational": 3,
00:15:14.140    "base_bdevs_list": [
00:15:14.140      {
00:15:14.140        "name": "pt1",
00:15:14.140        "uuid": "00000000-0000-0000-0000-000000000001",
00:15:14.140        "is_configured": true,
00:15:14.140        "data_offset": 2048,
00:15:14.140        "data_size": 63488
00:15:14.140      },
00:15:14.140      {
00:15:14.140        "name": "pt2",
00:15:14.140        "uuid": "00000000-0000-0000-0000-000000000002",
00:15:14.140        "is_configured": true,
00:15:14.140        "data_offset": 2048,
00:15:14.140        "data_size": 63488
00:15:14.140      },
00:15:14.140      {
00:15:14.140        "name": "pt3",
00:15:14.140        "uuid": "00000000-0000-0000-0000-000000000003",
00:15:14.140        "is_configured": true,
00:15:14.140        "data_offset": 2048,
00:15:14.140        "data_size": 63488
00:15:14.140      }
00:15:14.140    ]
00:15:14.140  }'
00:15:14.140   11:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:14.140   11:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:14.400   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:15:14.400   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:15:14.400   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:15:14.400   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:15:14.400   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:15:14.400   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:15:14.400    11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:15:14.400    11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:14.400    11:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:14.400    11:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:14.400  [2024-12-16 11:36:40.306284] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:14.400    11:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:14.400   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:15:14.400    "name": "raid_bdev1",
00:15:14.400    "aliases": [
00:15:14.400      "b934db85-00c2-44ad-a123-f4f238d5751b"
00:15:14.400    ],
00:15:14.400    "product_name": "Raid Volume",
00:15:14.400    "block_size": 512,
00:15:14.400    "num_blocks": 126976,
00:15:14.400    "uuid": "b934db85-00c2-44ad-a123-f4f238d5751b",
00:15:14.400    "assigned_rate_limits": {
00:15:14.400      "rw_ios_per_sec": 0,
00:15:14.400      "rw_mbytes_per_sec": 0,
00:15:14.400      "r_mbytes_per_sec": 0,
00:15:14.400      "w_mbytes_per_sec": 0
00:15:14.400    },
00:15:14.400    "claimed": false,
00:15:14.400    "zoned": false,
00:15:14.400    "supported_io_types": {
00:15:14.400      "read": true,
00:15:14.400      "write": true,
00:15:14.400      "unmap": false,
00:15:14.400      "flush": false,
00:15:14.400      "reset": true,
00:15:14.400      "nvme_admin": false,
00:15:14.400      "nvme_io": false,
00:15:14.400      "nvme_io_md": false,
00:15:14.400      "write_zeroes": true,
00:15:14.400      "zcopy": false,
00:15:14.400      "get_zone_info": false,
00:15:14.400      "zone_management": false,
00:15:14.400      "zone_append": false,
00:15:14.400      "compare": false,
00:15:14.400      "compare_and_write": false,
00:15:14.400      "abort": false,
00:15:14.400      "seek_hole": false,
00:15:14.400      "seek_data": false,
00:15:14.400      "copy": false,
00:15:14.400      "nvme_iov_md": false
00:15:14.400    },
00:15:14.400    "driver_specific": {
00:15:14.400      "raid": {
00:15:14.400        "uuid": "b934db85-00c2-44ad-a123-f4f238d5751b",
00:15:14.400        "strip_size_kb": 64,
00:15:14.400        "state": "online",
00:15:14.400        "raid_level": "raid5f",
00:15:14.400        "superblock": true,
00:15:14.400        "num_base_bdevs": 3,
00:15:14.400        "num_base_bdevs_discovered": 3,
00:15:14.400        "num_base_bdevs_operational": 3,
00:15:14.400        "base_bdevs_list": [
00:15:14.400          {
00:15:14.400            "name": "pt1",
00:15:14.400            "uuid": "00000000-0000-0000-0000-000000000001",
00:15:14.400            "is_configured": true,
00:15:14.400            "data_offset": 2048,
00:15:14.400            "data_size": 63488
00:15:14.400          },
00:15:14.400          {
00:15:14.400            "name": "pt2",
00:15:14.400            "uuid": "00000000-0000-0000-0000-000000000002",
00:15:14.400            "is_configured": true,
00:15:14.400            "data_offset": 2048,
00:15:14.400            "data_size": 63488
00:15:14.400          },
00:15:14.400          {
00:15:14.400            "name": "pt3",
00:15:14.400            "uuid": "00000000-0000-0000-0000-000000000003",
00:15:14.400            "is_configured": true,
00:15:14.400            "data_offset": 2048,
00:15:14.400            "data_size": 63488
00:15:14.400          }
00:15:14.400        ]
00:15:14.400      }
00:15:14.400    }
00:15:14.400  }'
00:15:14.400    11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:15:14.400   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:15:14.400  pt2
00:15:14.400  pt3'
00:15:14.400    11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:14.400   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:15:14.400   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:14.400    11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:15:14.400    11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:14.400    11:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:14.400    11:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:14.400    11:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:14.660   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:15:14.660   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:15:14.660   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:14.660    11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:15:14.660    11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:14.660    11:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:14.660    11:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:14.660    11:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:14.660   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:15:14.660   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:15:14.660   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:14.660    11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:14.660    11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:15:14.660    11:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:14.660    11:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:14.660    11:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:14.660   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:15:14.660   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:15:14.660    11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:14.660    11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:15:14.660    11:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:14.660    11:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:14.660  [2024-12-16 11:36:40.557889] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:14.660    11:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:14.660   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b934db85-00c2-44ad-a123-f4f238d5751b '!=' b934db85-00c2-44ad-a123-f4f238d5751b ']'
00:15:14.660   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f
00:15:14.660   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:15:14.660   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:15:14.660   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:15:14.660   11:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:14.660   11:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:14.660  [2024-12-16 11:36:40.605683] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:15:14.660   11:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:14.660   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:15:14.660   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:14.660   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:14.660   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:14.660   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:14.660   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:14.660   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:14.660   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:14.660   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:14.660   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:14.660    11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:14.660    11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:14.660    11:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:14.660    11:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:14.660    11:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:14.660   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:14.660    "name": "raid_bdev1",
00:15:14.660    "uuid": "b934db85-00c2-44ad-a123-f4f238d5751b",
00:15:14.661    "strip_size_kb": 64,
00:15:14.661    "state": "online",
00:15:14.661    "raid_level": "raid5f",
00:15:14.661    "superblock": true,
00:15:14.661    "num_base_bdevs": 3,
00:15:14.661    "num_base_bdevs_discovered": 2,
00:15:14.661    "num_base_bdevs_operational": 2,
00:15:14.661    "base_bdevs_list": [
00:15:14.661      {
00:15:14.661        "name": null,
00:15:14.661        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:14.661        "is_configured": false,
00:15:14.661        "data_offset": 0,
00:15:14.661        "data_size": 63488
00:15:14.661      },
00:15:14.661      {
00:15:14.661        "name": "pt2",
00:15:14.661        "uuid": "00000000-0000-0000-0000-000000000002",
00:15:14.661        "is_configured": true,
00:15:14.661        "data_offset": 2048,
00:15:14.661        "data_size": 63488
00:15:14.661      },
00:15:14.661      {
00:15:14.661        "name": "pt3",
00:15:14.661        "uuid": "00000000-0000-0000-0000-000000000003",
00:15:14.661        "is_configured": true,
00:15:14.661        "data_offset": 2048,
00:15:14.661        "data_size": 63488
00:15:14.661      }
00:15:14.661    ]
00:15:14.661  }'
00:15:14.661   11:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:14.661   11:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:15.230  [2024-12-16 11:36:41.052853] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:15.230  [2024-12-16 11:36:41.052933] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:15.230  [2024-12-16 11:36:41.053031] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:15.230  [2024-12-16 11:36:41.053125] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:15.230  [2024-12-16 11:36:41.053176] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:15.230    11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:15.230    11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:15:15.230    11:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:15.230    11:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:15.230    11:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:15.230  [2024-12-16 11:36:41.140668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:15.230  [2024-12-16 11:36:41.140756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:15.230  [2024-12-16 11:36:41.140812] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580
00:15:15.230  [2024-12-16 11:36:41.140846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:15.230  [2024-12-16 11:36:41.143130] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:15.230  [2024-12-16 11:36:41.143200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:15.230  [2024-12-16 11:36:41.143326] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:15:15.230  [2024-12-16 11:36:41.143397] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:15.230  pt2
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:15.230    11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:15.230    11:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:15.230    11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:15.230    11:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:15.230    11:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:15.230    "name": "raid_bdev1",
00:15:15.230    "uuid": "b934db85-00c2-44ad-a123-f4f238d5751b",
00:15:15.230    "strip_size_kb": 64,
00:15:15.230    "state": "configuring",
00:15:15.230    "raid_level": "raid5f",
00:15:15.230    "superblock": true,
00:15:15.230    "num_base_bdevs": 3,
00:15:15.230    "num_base_bdevs_discovered": 1,
00:15:15.230    "num_base_bdevs_operational": 2,
00:15:15.230    "base_bdevs_list": [
00:15:15.230      {
00:15:15.230        "name": null,
00:15:15.230        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:15.230        "is_configured": false,
00:15:15.230        "data_offset": 2048,
00:15:15.230        "data_size": 63488
00:15:15.230      },
00:15:15.230      {
00:15:15.230        "name": "pt2",
00:15:15.230        "uuid": "00000000-0000-0000-0000-000000000002",
00:15:15.230        "is_configured": true,
00:15:15.230        "data_offset": 2048,
00:15:15.230        "data_size": 63488
00:15:15.230      },
00:15:15.230      {
00:15:15.230        "name": null,
00:15:15.230        "uuid": "00000000-0000-0000-0000-000000000003",
00:15:15.230        "is_configured": false,
00:15:15.230        "data_offset": 2048,
00:15:15.230        "data_size": 63488
00:15:15.230      }
00:15:15.230    ]
00:15:15.230  }'
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:15.230   11:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
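The verify_raid_bdev_state calls traced above reduce to a single bdev_raid_get_bdevs query filtered with jq plus a few field assertions. A minimal standalone sketch of the same check, reusing only commands, paths, and values visible in this trace (the rpc variable name and the asserted values are illustrative, taken from this run), would be:

  # Query raid_bdev1 and assert on the fields the test cares about
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  info=$("$rpc" -s /var/tmp/spdk.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  [[ $(jq -r '.state' <<< "$info") == configuring ]]
  [[ $(jq -r '.raid_level' <<< "$info") == raid5f ]]
  [[ $(jq -r '.strip_size_kb' <<< "$info") == 64 ]]
  [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") == 1 ]]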
00:15:15.490   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:15:15.490   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:15:15.490   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2
00:15:15.490   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:15:15.490   11:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:15.490   11:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:15.749  [2024-12-16 11:36:41.555980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:15:15.750  [2024-12-16 11:36:41.556083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:15.750  [2024-12-16 11:36:41.556124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:15:15.750  [2024-12-16 11:36:41.556152] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:15.750  [2024-12-16 11:36:41.556589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:15.750  [2024-12-16 11:36:41.556644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:15:15.750  [2024-12-16 11:36:41.556749] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:15:15.750  [2024-12-16 11:36:41.556809] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:15:15.750  [2024-12-16 11:36:41.556939] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:15:15.750  [2024-12-16 11:36:41.556975] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:15:15.750  [2024-12-16 11:36:41.557227] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:15:15.750  [2024-12-16 11:36:41.557732] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:15:15.750  [2024-12-16 11:36:41.557785] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00
00:15:15.750  [2024-12-16 11:36:41.558039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:15.750  pt3
00:15:15.750   11:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
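Each passthru bdev in this test is created over its malloc backing with a fixed UUID, so that on examine the raid5f superblock written earlier is recognized and the bdev is claimed (the NOTICE/DEBUG lines above). A hedged sketch of that single step, using only the command shown in this trace:

  # Layer pt3 over malloc3 with a deterministic UUID so raid_bdev_examine
  # finds the existing raid5f superblock and claims the bdev
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" -s /var/tmp/spdk.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003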
00:15:15.750   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:15:15.750   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:15.750   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:15.750   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:15.750   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:15.750   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:15.750   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:15.750   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:15.750   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:15.750   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:15.750    11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:15.750    11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:15.750    11:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:15.750    11:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:15.750    11:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:15.750   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:15.750    "name": "raid_bdev1",
00:15:15.750    "uuid": "b934db85-00c2-44ad-a123-f4f238d5751b",
00:15:15.750    "strip_size_kb": 64,
00:15:15.750    "state": "online",
00:15:15.750    "raid_level": "raid5f",
00:15:15.750    "superblock": true,
00:15:15.750    "num_base_bdevs": 3,
00:15:15.750    "num_base_bdevs_discovered": 2,
00:15:15.750    "num_base_bdevs_operational": 2,
00:15:15.750    "base_bdevs_list": [
00:15:15.750      {
00:15:15.750        "name": null,
00:15:15.750        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:15.750        "is_configured": false,
00:15:15.750        "data_offset": 2048,
00:15:15.750        "data_size": 63488
00:15:15.750      },
00:15:15.750      {
00:15:15.750        "name": "pt2",
00:15:15.750        "uuid": "00000000-0000-0000-0000-000000000002",
00:15:15.750        "is_configured": true,
00:15:15.750        "data_offset": 2048,
00:15:15.750        "data_size": 63488
00:15:15.750      },
00:15:15.750      {
00:15:15.750        "name": "pt3",
00:15:15.750        "uuid": "00000000-0000-0000-0000-000000000003",
00:15:15.750        "is_configured": true,
00:15:15.750        "data_offset": 2048,
00:15:15.750        "data_size": 63488
00:15:15.750      }
00:15:15.750    ]
00:15:15.750  }'
00:15:15.750   11:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:15.750   11:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:16.009   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:15:16.009   11:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:16.009   11:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:16.009  [2024-12-16 11:36:42.023290] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:16.009  [2024-12-16 11:36:42.023325] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:16.009  [2024-12-16 11:36:42.023400] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:16.009  [2024-12-16 11:36:42.023458] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:16.009  [2024-12-16 11:36:42.023470] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline
00:15:16.009   11:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:16.009    11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:16.009    11:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:16.009    11:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:16.010    11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:15:16.010    11:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:16.010   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:15:16.010   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:15:16.010   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']'
00:15:16.010   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2
00:15:16.010   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3
00:15:16.010   11:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:16.010   11:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:16.270   11:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:16.270   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:15:16.270   11:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:16.270   11:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:16.270  [2024-12-16 11:36:42.083157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:15:16.270  [2024-12-16 11:36:42.083283] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:16.270  [2024-12-16 11:36:42.083323] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:15:16.270  [2024-12-16 11:36:42.083357] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:16.270  [2024-12-16 11:36:42.085690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:16.270  [2024-12-16 11:36:42.085757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:15:16.270  [2024-12-16 11:36:42.085872] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:15:16.270  [2024-12-16 11:36:42.085940] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:15:16.270  [2024-12-16 11:36:42.086097] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:15:16.270  [2024-12-16 11:36:42.086170] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:16.270  [2024-12-16 11:36:42.086209] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring
00:15:16.270  [2024-12-16 11:36:42.086298] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:16.270  pt1
00:15:16.270   11:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
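The seq_number message above is the point of this step: after raid_bdev1 and pt3 were torn down, re-creating pt1 triggers examine and assembles a raid_bdev1 from pt1's older superblock (seq_number 2), but pt2 still carries a newer superblock (seq_number 4), so the stale raid_bdev1 is deleted and re-created around pt2. A sketch of the teardown/re-add sequence exercised here (commands copied from this trace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" -s /var/tmp/spdk.sock bdev_raid_delete raid_bdev1
  "$rpc" -s /var/tmp/spdk.sock bdev_passthru_delete pt3
  "$rpc" -s /var/tmp/spdk.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001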
00:15:16.270   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']'
00:15:16.270   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2
00:15:16.270   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:16.270   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:16.270   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:16.270   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:16.270   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:16.270   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:16.270   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:16.270   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:16.270   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:16.270    11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:16.270    11:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:16.270    11:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:16.270    11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:16.270    11:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:16.270   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:16.270    "name": "raid_bdev1",
00:15:16.270    "uuid": "b934db85-00c2-44ad-a123-f4f238d5751b",
00:15:16.270    "strip_size_kb": 64,
00:15:16.270    "state": "configuring",
00:15:16.270    "raid_level": "raid5f",
00:15:16.270    "superblock": true,
00:15:16.270    "num_base_bdevs": 3,
00:15:16.270    "num_base_bdevs_discovered": 1,
00:15:16.270    "num_base_bdevs_operational": 2,
00:15:16.270    "base_bdevs_list": [
00:15:16.270      {
00:15:16.270        "name": null,
00:15:16.270        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:16.270        "is_configured": false,
00:15:16.270        "data_offset": 2048,
00:15:16.270        "data_size": 63488
00:15:16.270      },
00:15:16.270      {
00:15:16.270        "name": "pt2",
00:15:16.270        "uuid": "00000000-0000-0000-0000-000000000002",
00:15:16.270        "is_configured": true,
00:15:16.270        "data_offset": 2048,
00:15:16.270        "data_size": 63488
00:15:16.270      },
00:15:16.270      {
00:15:16.270        "name": null,
00:15:16.270        "uuid": "00000000-0000-0000-0000-000000000003",
00:15:16.270        "is_configured": false,
00:15:16.270        "data_offset": 2048,
00:15:16.270        "data_size": 63488
00:15:16.270      }
00:15:16.270    ]
00:15:16.270  }'
00:15:16.270   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:16.270   11:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:16.530    11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring
00:15:16.530    11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:15:16.530    11:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:16.530    11:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:16.530    11:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:16.530   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]]
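This check pins down that slot 0 of base_bdevs_list stays a placeholder after the re-assembly above. The equivalent standalone query, using the same rpc call and jq filter as the trace:

  # Slot 0 must remain unconfigured (the null placeholder entry)
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  cfg=$("$rpc" -s /var/tmp/spdk.sock bdev_raid_get_bdevs configuring | jq -r '.[].base_bdevs_list[0].is_configured')
  [[ $cfg == false ]]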
00:15:16.530   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:15:16.530   11:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:16.530   11:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:16.530  [2024-12-16 11:36:42.554372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:15:16.530  [2024-12-16 11:36:42.554486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:16.530  [2024-12-16 11:36:42.554526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:15:16.530  [2024-12-16 11:36:42.554582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:16.530  [2024-12-16 11:36:42.555111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:16.530  [2024-12-16 11:36:42.555185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:15:16.530  [2024-12-16 11:36:42.555318] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:15:16.530  [2024-12-16 11:36:42.555387] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:15:16.530  [2024-12-16 11:36:42.555527] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400
00:15:16.530  [2024-12-16 11:36:42.555588] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:15:16.530  [2024-12-16 11:36:42.555876] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:15:16.530  [2024-12-16 11:36:42.556486] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400
00:15:16.530  [2024-12-16 11:36:42.556562] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400
00:15:16.530  [2024-12-16 11:36:42.556747] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:16.530  pt3
00:15:16.530   11:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:16.530   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:15:16.530   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:16.531   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:16.531   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:16.531   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:16.531   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:16.531   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:16.531   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:16.531   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:16.531   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:16.531    11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:16.531    11:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:16.531    11:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:16.531    11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:16.531    11:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:16.790   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:16.790    "name": "raid_bdev1",
00:15:16.790    "uuid": "b934db85-00c2-44ad-a123-f4f238d5751b",
00:15:16.790    "strip_size_kb": 64,
00:15:16.790    "state": "online",
00:15:16.790    "raid_level": "raid5f",
00:15:16.790    "superblock": true,
00:15:16.790    "num_base_bdevs": 3,
00:15:16.790    "num_base_bdevs_discovered": 2,
00:15:16.790    "num_base_bdevs_operational": 2,
00:15:16.790    "base_bdevs_list": [
00:15:16.790      {
00:15:16.790        "name": null,
00:15:16.790        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:16.790        "is_configured": false,
00:15:16.790        "data_offset": 2048,
00:15:16.790        "data_size": 63488
00:15:16.790      },
00:15:16.790      {
00:15:16.790        "name": "pt2",
00:15:16.790        "uuid": "00000000-0000-0000-0000-000000000002",
00:15:16.790        "is_configured": true,
00:15:16.790        "data_offset": 2048,
00:15:16.790        "data_size": 63488
00:15:16.790      },
00:15:16.790      {
00:15:16.790        "name": "pt3",
00:15:16.790        "uuid": "00000000-0000-0000-0000-000000000003",
00:15:16.790        "is_configured": true,
00:15:16.790        "data_offset": 2048,
00:15:16.790        "data_size": 63488
00:15:16.790      }
00:15:16.790    ]
00:15:16.790  }'
00:15:16.790   11:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:16.790   11:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:17.050    11:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:15:17.050    11:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online
00:15:17.050    11:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:17.050    11:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:17.050    11:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:17.050   11:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:15:17.050    11:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:17.050    11:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
00:15:17.050    11:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:17.050    11:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:17.050  [2024-12-16 11:36:43.101691] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:17.310    11:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:17.310   11:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' b934db85-00c2-44ad-a123-f4f238d5751b '!=' b934db85-00c2-44ad-a123-f4f238d5751b ']'
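The comparison above verifies that raid_bdev1 kept its UUID across the delete/re-examine cycle. A sketch of the same assertion (the UUID value is the one reported in this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  uuid=$("$rpc" -s /var/tmp/spdk.sock bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
  [[ $uuid == b934db85-00c2-44ad-a123-f4f238d5751b ]]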
00:15:17.310   11:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 92032
00:15:17.310   11:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 92032 ']'
00:15:17.310   11:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 92032
00:15:17.310    11:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname
00:15:17.310   11:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:15:17.310    11:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92032
00:15:17.310   11:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:15:17.310   11:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:15:17.310  killing process with pid 92032
00:15:17.310   11:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92032'
00:15:17.310   11:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 92032
00:15:17.310  [2024-12-16 11:36:43.167356] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:15:17.310  [2024-12-16 11:36:43.167445] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:17.310  [2024-12-16 11:36:43.167518] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:17.310  [2024-12-16 11:36:43.167529] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline
00:15:17.310   11:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 92032
00:15:17.310  [2024-12-16 11:36:43.200749] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:15:17.570   11:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
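killprocess follows the usual pattern traced above: confirm the pid is still alive and is the expected process, kill it, then wait for it to exit so the offline/cleanup DEBUG lines can flush. A sketch under the assumption that the target app was started from the same shell (otherwise wait cannot reap it):

  pid=92032                       # pid from this run
  kill -0 "$pid" && kill "$pid"   # signal only if still alive
  wait "$pid" || true             # reap; ignore the non-zero exit of a killed app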
00:15:17.570  
00:15:17.570  real	0m6.558s
00:15:17.570  user	0m11.001s
00:15:17.570  sys	0m1.422s
00:15:17.570  ************************************
00:15:17.570  END TEST raid5f_superblock_test
00:15:17.570  ************************************
00:15:17.570   11:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:15:17.570   11:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:17.570   11:36:43 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']'
00:15:17.570   11:36:43 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true
00:15:17.570   11:36:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:15:17.570   11:36:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:15:17.570   11:36:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:15:17.570  ************************************
00:15:17.570  START TEST raid5f_rebuild_test
00:15:17.570  ************************************
00:15:17.570   11:36:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true
00:15:17.570   11:36:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f
00:15:17.570   11:36:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3
00:15:17.570   11:36:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false
00:15:17.570   11:36:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:15:17.570   11:36:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true
00:15:17.570    11:36:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:15:17.570    11:36:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:15:17.570    11:36:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:15:17.570    11:36:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:15:17.570    11:36:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:15:17.570    11:36:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:15:17.570    11:36:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:15:17.570    11:36:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:15:17.570    11:36:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
00:15:17.570    11:36:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:15:17.570    11:36:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:15:17.570   11:36:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:15:17.570   11:36:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:15:17.570   11:36:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:15:17.570   11:36:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size
00:15:17.570   11:36:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg
00:15:17.570   11:36:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:15:17.570   11:36:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset
00:15:17.570   11:36:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']'
00:15:17.570   11:36:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']'
00:15:17.570   11:36:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64
00:15:17.570   11:36:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64'
00:15:17.570   11:36:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']'
00:15:17.570   11:36:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=92459
00:15:17.570   11:36:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:15:17.570   11:36:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 92459
00:15:17.570   11:36:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 92459 ']'
00:15:17.570   11:36:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:17.570   11:36:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:15:17.570   11:36:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:17.570  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:17.570   11:36:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:15:17.570   11:36:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:17.570  I/O size of 3145728 is greater than zero copy threshold (65536).
00:15:17.570  Zero copy mechanism will not be used.
00:15:17.570  [2024-12-16 11:36:43.614644] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:15:17.570  [2024-12-16 11:36:43.614779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92459 ]
00:15:17.837  [2024-12-16 11:36:43.773615] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:17.837  [2024-12-16 11:36:43.818002] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:15:17.837  [2024-12-16 11:36:43.859164] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:17.837  [2024-12-16 11:36:43.859297] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:18.415   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:15:18.415   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0
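raid5f_rebuild_test runs against the bdevperf example app rather than a plain SPDK target; the EAL banner and the zero-copy notice above come from that launch. A self-contained sketch of the startup (command line copied from the trace; the socket-polling loop stands in for the waitforlisten helper the script actually uses):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  # wait for the RPC socket instead of autotest_common.sh's waitforlisten
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done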
00:15:18.415   11:36:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:15:18.415   11:36:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:15:18.415   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:18.415   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:18.415  BaseBdev1_malloc
00:15:18.415   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:18.415   11:36:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:15:18.415   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:18.415   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:18.415  [2024-12-16 11:36:44.468681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:15:18.415  [2024-12-16 11:36:44.468811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:18.415  [2024-12-16 11:36:44.468858] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:15:18.415  [2024-12-16 11:36:44.468898] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:18.415  [2024-12-16 11:36:44.471026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:18.415  [2024-12-16 11:36:44.471097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:15:18.415  BaseBdev1
00:15:18.415   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
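Each base bdev is built the same way: a 32 MiB, 512-byte-block malloc bdev wrapped in a passthru bdev (sizes and names taken from the rpc_cmd calls above). For BaseBdev1 that is:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" -s /var/tmp/spdk.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
  "$rpc" -s /var/tmp/spdk.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1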
00:15:18.415   11:36:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:15:18.415   11:36:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:15:18.415   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:18.415   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:18.675  BaseBdev2_malloc
00:15:18.675   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:18.675   11:36:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:15:18.675   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:18.675   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:18.675  [2024-12-16 11:36:44.506474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:15:18.675  [2024-12-16 11:36:44.506578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:18.675  [2024-12-16 11:36:44.506634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:15:18.675  [2024-12-16 11:36:44.506665] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:18.675  [2024-12-16 11:36:44.508753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:18.675  [2024-12-16 11:36:44.508821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:15:18.675  BaseBdev2
00:15:18.675   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:18.675   11:36:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:15:18.675   11:36:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:15:18.675   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:18.676  BaseBdev3_malloc
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:18.676  [2024-12-16 11:36:44.535047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:15:18.676  [2024-12-16 11:36:44.535135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:18.676  [2024-12-16 11:36:44.535194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:15:18.676  [2024-12-16 11:36:44.535224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:18.676  [2024-12-16 11:36:44.537298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:18.676  [2024-12-16 11:36:44.537367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:15:18.676  BaseBdev3
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:18.676  spare_malloc
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:18.676  spare_delay
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:18.676  [2024-12-16 11:36:44.575562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:15:18.676  [2024-12-16 11:36:44.575649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:18.676  [2024-12-16 11:36:44.575691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:15:18.676  [2024-12-16 11:36:44.575719] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:18.676  [2024-12-16 11:36:44.577825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:18.676  [2024-12-16 11:36:44.577888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:15:18.676  spare
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
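The rebuild target ("spare") gets an extra delay bdev between its malloc and passthru layers; the -w/-n write-latency parameters keep the rebuild in flight long enough to observe. The chain, exactly as issued in this trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" -s /var/tmp/spdk.sock bdev_malloc_create 32 512 -b spare_malloc
  "$rpc" -s /var/tmp/spdk.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
  "$rpc" -s /var/tmp/spdk.sock bdev_passthru_create -b spare_delay -p spare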
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:18.676  [2024-12-16 11:36:44.587601] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:18.676  [2024-12-16 11:36:44.589600] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:18.676  [2024-12-16 11:36:44.589667] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:18.676  [2024-12-16 11:36:44.589743] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:15:18.676  [2024-12-16 11:36:44.589754] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:15:18.676  [2024-12-16 11:36:44.590005] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:15:18.676  [2024-12-16 11:36:44.590404] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:15:18.676  [2024-12-16 11:36:44.590416] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:15:18.676  [2024-12-16 11:36:44.590546] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
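With the three base passthru bdevs in place, the array is assembled in one call; this variant runs without a superblock (superblock: false in the JSON below), which is why data_offset later reports 0. The create call, as shown above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" -s /var/tmp/spdk.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1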
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:18.676    11:36:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:18.676    11:36:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:18.676    11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:18.676    11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:18.676    11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:18.676    "name": "raid_bdev1",
00:15:18.676    "uuid": "31df1664-d88c-420e-bba6-da68e4c01d60",
00:15:18.676    "strip_size_kb": 64,
00:15:18.676    "state": "online",
00:15:18.676    "raid_level": "raid5f",
00:15:18.676    "superblock": false,
00:15:18.676    "num_base_bdevs": 3,
00:15:18.676    "num_base_bdevs_discovered": 3,
00:15:18.676    "num_base_bdevs_operational": 3,
00:15:18.676    "base_bdevs_list": [
00:15:18.676      {
00:15:18.676        "name": "BaseBdev1",
00:15:18.676        "uuid": "a84e08db-5996-5061-a8f9-97e05137a007",
00:15:18.676        "is_configured": true,
00:15:18.676        "data_offset": 0,
00:15:18.676        "data_size": 65536
00:15:18.676      },
00:15:18.676      {
00:15:18.676        "name": "BaseBdev2",
00:15:18.676        "uuid": "36a419fa-ed1a-5e29-866d-86152ed06461",
00:15:18.676        "is_configured": true,
00:15:18.676        "data_offset": 0,
00:15:18.676        "data_size": 65536
00:15:18.676      },
00:15:18.676      {
00:15:18.676        "name": "BaseBdev3",
00:15:18.676        "uuid": "0ad5891f-93f3-5d5e-91b6-096125fcf9e3",
00:15:18.676        "is_configured": true,
00:15:18.676        "data_offset": 0,
00:15:18.676        "data_size": 65536
00:15:18.676      }
00:15:18.676    ]
00:15:18.676  }'
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:18.676   11:36:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:19.245    11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:15:19.245    11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:19.245    11:36:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:19.245    11:36:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:19.245  [2024-12-16 11:36:45.015411] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:19.245    11:36:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:19.245   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072
00:15:19.245    11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:19.245    11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:15:19.245    11:36:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:19.245    11:36:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:19.245    11:36:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:19.245   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0
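The numbers above are consistent: each base bdev exposes 65536 data blocks (data_size), raid5f keeps one strip of parity per stripe, and there is no superblock, so data_offset is 0 and the array exposes (3 - 1) * 65536 = 131072 blocks, matching num_blocks. As shell arithmetic:

  num_base_bdevs=3
  base_data_blocks=65536                                        # data_size from the JSON above
  raid_bdev_size=$(( (num_base_bdevs - 1) * base_data_blocks )) # 131072, matching num_blocks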
00:15:19.245   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:15:19.245   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:15:19.245   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:15:19.245   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:15:19.245   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:15:19.245   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:15:19.245   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:15:19.245   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:15:19.245   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:15:19.245   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:15:19.245   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:15:19.245   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:15:19.245   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:15:19.245  [2024-12-16 11:36:45.270836] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:15:19.245  /dev/nbd0
00:15:19.245    11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:15:19.504   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:15:19.504   11:36:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:15:19.504   11:36:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i
00:15:19.504   11:36:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:15:19.504   11:36:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:15:19.504   11:36:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:15:19.504   11:36:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break
00:15:19.504   11:36:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:15:19.504   11:36:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:15:19.504   11:36:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:19.504  1+0 records in
00:15:19.504  1+0 records out
00:15:19.504  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392193 s, 10.4 MB/s
00:15:19.504    11:36:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:19.504   11:36:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096
00:15:19.504   11:36:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:19.504   11:36:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:15:19.504   11:36:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0
00:15:19.504   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:19.504   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:15:19.504   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']'
00:15:19.504   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256
00:15:19.504   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128
00:15:19.504   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct
00:15:19.762  512+0 records in
00:15:19.762  512+0 records out
00:15:19.762  67108864 bytes (67 MB, 64 MiB) copied, 0.29181 s, 230 MB/s
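The dd above fills the whole volume through NBD with full-stripe direct writes: bs=131072 bytes is one stripe of data (two 64 KiB data strips), and 512 of them cover the 64 MiB array, which is why the summary reports 67108864 bytes. A sketch of the sequence (device node and sizes from this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
  dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct
  "$rpc" -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0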
00:15:19.762   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:15:19.762   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:15:19.762   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:15:19.762   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:15:19.762   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:15:19.762   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:19.762   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:15:20.022    11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:15:20.022  [2024-12-16 11:36:45.853787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:20.022   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:15:20.022   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:15:20.022   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:20.022   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:20.022   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:15:20.022   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:15:20.022   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:15:20.022   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:15:20.022   11:36:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:20.022   11:36:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:20.022  [2024-12-16 11:36:45.869867] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:15:20.022   11:36:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:20.022   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:15:20.022   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:20.022   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:20.022   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:20.022   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:20.022   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:20.022   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:20.022   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:20.022   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:20.022   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:20.022    11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:20.022    11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:20.022    11:36:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:20.022    11:36:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:20.022    11:36:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:20.022   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:20.022    "name": "raid_bdev1",
00:15:20.022    "uuid": "31df1664-d88c-420e-bba6-da68e4c01d60",
00:15:20.022    "strip_size_kb": 64,
00:15:20.022    "state": "online",
00:15:20.022    "raid_level": "raid5f",
00:15:20.022    "superblock": false,
00:15:20.022    "num_base_bdevs": 3,
00:15:20.022    "num_base_bdevs_discovered": 2,
00:15:20.022    "num_base_bdevs_operational": 2,
00:15:20.022    "base_bdevs_list": [
00:15:20.022      {
00:15:20.022        "name": null,
00:15:20.022        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:20.022        "is_configured": false,
00:15:20.022        "data_offset": 0,
00:15:20.022        "data_size": 65536
00:15:20.022      },
00:15:20.022      {
00:15:20.022        "name": "BaseBdev2",
00:15:20.022        "uuid": "36a419fa-ed1a-5e29-866d-86152ed06461",
00:15:20.022        "is_configured": true,
00:15:20.022        "data_offset": 0,
00:15:20.022        "data_size": 65536
00:15:20.022      },
00:15:20.022      {
00:15:20.022        "name": "BaseBdev3",
00:15:20.022        "uuid": "0ad5891f-93f3-5d5e-91b6-096125fcf9e3",
00:15:20.022        "is_configured": true,
00:15:20.022        "data_offset": 0,
00:15:20.022        "data_size": 65536
00:15:20.022      }
00:15:20.022    ]
00:15:20.022  }'
00:15:20.022   11:36:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:20.022   11:36:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:20.281   11:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:15:20.281   11:36:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:20.281   11:36:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:20.281  [2024-12-16 11:36:46.313132] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:20.281  [2024-12-16 11:36:46.317168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b4e0
00:15:20.281   11:36:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:20.281   11:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1
00:15:20.282  [2024-12-16 11:36:46.319612] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
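With BaseBdev1 removed further up, attaching the delayed spare bdev starts the raid5f rebuild reported in the NOTICE above; the test then polls the process object until its type and target read rebuild and spare. A sketch of the degrade-and-rebuild step (commands and jq filters from this trace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" -s /var/tmp/spdk.sock bdev_raid_remove_base_bdev BaseBdev1
  "$rpc" -s /var/tmp/spdk.sock bdev_raid_add_base_bdev raid_bdev1 spare
  "$rpc" -s /var/tmp/spdk.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"'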
00:15:21.663   11:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:21.663   11:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:21.663   11:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:21.663   11:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:21.663   11:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:21.663    11:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:21.663    11:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:21.663    11:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:21.663    11:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:21.663    11:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:21.663   11:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:21.663    "name": "raid_bdev1",
00:15:21.663    "uuid": "31df1664-d88c-420e-bba6-da68e4c01d60",
00:15:21.663    "strip_size_kb": 64,
00:15:21.663    "state": "online",
00:15:21.663    "raid_level": "raid5f",
00:15:21.663    "superblock": false,
00:15:21.663    "num_base_bdevs": 3,
00:15:21.663    "num_base_bdevs_discovered": 3,
00:15:21.663    "num_base_bdevs_operational": 3,
00:15:21.663    "process": {
00:15:21.663      "type": "rebuild",
00:15:21.663      "target": "spare",
00:15:21.663      "progress": {
00:15:21.663        "blocks": 20480,
00:15:21.663        "percent": 15
00:15:21.663      }
00:15:21.663    },
00:15:21.663    "base_bdevs_list": [
00:15:21.663      {
00:15:21.663        "name": "spare",
00:15:21.663        "uuid": "74210367-31b2-5e9a-b044-5a00169e575d",
00:15:21.663        "is_configured": true,
00:15:21.663        "data_offset": 0,
00:15:21.663        "data_size": 65536
00:15:21.663      },
00:15:21.663      {
00:15:21.663        "name": "BaseBdev2",
00:15:21.663        "uuid": "36a419fa-ed1a-5e29-866d-86152ed06461",
00:15:21.663        "is_configured": true,
00:15:21.663        "data_offset": 0,
00:15:21.663        "data_size": 65536
00:15:21.663      },
00:15:21.663      {
00:15:21.663        "name": "BaseBdev3",
00:15:21.663        "uuid": "0ad5891f-93f3-5d5e-91b6-096125fcf9e3",
00:15:21.663        "is_configured": true,
00:15:21.663        "data_offset": 0,
00:15:21.663        "data_size": 65536
00:15:21.663      }
00:15:21.663    ]
00:15:21.663  }'
00:15:21.663    11:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:21.663   11:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:21.663    11:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:21.663   11:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:21.663   11:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:15:21.663   11:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:21.663   11:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:21.663  [2024-12-16 11:36:47.484451] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:21.663  [2024-12-16 11:36:47.529297] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:15:21.663  [2024-12-16 11:36:47.529428] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:21.663  [2024-12-16 11:36:47.529473] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:21.663  [2024-12-16 11:36:47.529515] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:15:21.663   11:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:21.663   11:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:15:21.663   11:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:21.663   11:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:21.663   11:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:21.663   11:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:21.663   11:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:21.663   11:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:21.663   11:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:21.663   11:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:21.663   11:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:21.663    11:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:21.663    11:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:21.663    11:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:21.663    11:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:21.663    11:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:21.663   11:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:21.663    "name": "raid_bdev1",
00:15:21.663    "uuid": "31df1664-d88c-420e-bba6-da68e4c01d60",
00:15:21.663    "strip_size_kb": 64,
00:15:21.663    "state": "online",
00:15:21.663    "raid_level": "raid5f",
00:15:21.663    "superblock": false,
00:15:21.663    "num_base_bdevs": 3,
00:15:21.663    "num_base_bdevs_discovered": 2,
00:15:21.663    "num_base_bdevs_operational": 2,
00:15:21.663    "base_bdevs_list": [
00:15:21.663      {
00:15:21.663        "name": null,
00:15:21.663        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:21.663        "is_configured": false,
00:15:21.663        "data_offset": 0,
00:15:21.663        "data_size": 65536
00:15:21.663      },
00:15:21.663      {
00:15:21.663        "name": "BaseBdev2",
00:15:21.663        "uuid": "36a419fa-ed1a-5e29-866d-86152ed06461",
00:15:21.663        "is_configured": true,
00:15:21.663        "data_offset": 0,
00:15:21.663        "data_size": 65536
00:15:21.663      },
00:15:21.663      {
00:15:21.663        "name": "BaseBdev3",
00:15:21.663        "uuid": "0ad5891f-93f3-5d5e-91b6-096125fcf9e3",
00:15:21.663        "is_configured": true,
00:15:21.663        "data_offset": 0,
00:15:21.663        "data_size": 65536
00:15:21.663      }
00:15:21.663    ]
00:15:21.663  }'
00:15:21.663   11:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:21.663   11:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:22.233   11:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:22.233   11:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:22.233   11:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:15:22.233   11:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none
00:15:22.233   11:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:22.233    11:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:22.233    11:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:22.233    11:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:22.233    11:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:22.233    11:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:22.233   11:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:22.233    "name": "raid_bdev1",
00:15:22.233    "uuid": "31df1664-d88c-420e-bba6-da68e4c01d60",
00:15:22.233    "strip_size_kb": 64,
00:15:22.233    "state": "online",
00:15:22.233    "raid_level": "raid5f",
00:15:22.233    "superblock": false,
00:15:22.233    "num_base_bdevs": 3,
00:15:22.233    "num_base_bdevs_discovered": 2,
00:15:22.233    "num_base_bdevs_operational": 2,
00:15:22.233    "base_bdevs_list": [
00:15:22.233      {
00:15:22.233        "name": null,
00:15:22.233        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:22.233        "is_configured": false,
00:15:22.233        "data_offset": 0,
00:15:22.233        "data_size": 65536
00:15:22.233      },
00:15:22.233      {
00:15:22.233        "name": "BaseBdev2",
00:15:22.233        "uuid": "36a419fa-ed1a-5e29-866d-86152ed06461",
00:15:22.233        "is_configured": true,
00:15:22.233        "data_offset": 0,
00:15:22.233        "data_size": 65536
00:15:22.233      },
00:15:22.233      {
00:15:22.233        "name": "BaseBdev3",
00:15:22.233        "uuid": "0ad5891f-93f3-5d5e-91b6-096125fcf9e3",
00:15:22.233        "is_configured": true,
00:15:22.233        "data_offset": 0,
00:15:22.233        "data_size": 65536
00:15:22.233      }
00:15:22.233    ]
00:15:22.233  }'
00:15:22.233    11:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:22.233   11:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:22.233    11:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:22.233   11:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:15:22.233   11:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:15:22.233   11:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:22.233   11:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:22.233  [2024-12-16 11:36:48.118242] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:22.233  [2024-12-16 11:36:48.122127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0
00:15:22.233   11:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:22.233   11:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1
00:15:22.233  [2024-12-16 11:36:48.124346] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:15:23.173   11:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:23.173   11:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:23.173   11:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:23.173   11:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:23.173   11:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:23.173    11:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:23.174    11:36:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:23.174    11:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:23.174    11:36:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:23.174    11:36:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:23.174   11:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:23.174    "name": "raid_bdev1",
00:15:23.174    "uuid": "31df1664-d88c-420e-bba6-da68e4c01d60",
00:15:23.174    "strip_size_kb": 64,
00:15:23.174    "state": "online",
00:15:23.174    "raid_level": "raid5f",
00:15:23.174    "superblock": false,
00:15:23.174    "num_base_bdevs": 3,
00:15:23.174    "num_base_bdevs_discovered": 3,
00:15:23.174    "num_base_bdevs_operational": 3,
00:15:23.174    "process": {
00:15:23.174      "type": "rebuild",
00:15:23.174      "target": "spare",
00:15:23.174      "progress": {
00:15:23.174        "blocks": 20480,
00:15:23.174        "percent": 15
00:15:23.174      }
00:15:23.174    },
00:15:23.174    "base_bdevs_list": [
00:15:23.174      {
00:15:23.174        "name": "spare",
00:15:23.174        "uuid": "74210367-31b2-5e9a-b044-5a00169e575d",
00:15:23.174        "is_configured": true,
00:15:23.174        "data_offset": 0,
00:15:23.174        "data_size": 65536
00:15:23.174      },
00:15:23.174      {
00:15:23.174        "name": "BaseBdev2",
00:15:23.174        "uuid": "36a419fa-ed1a-5e29-866d-86152ed06461",
00:15:23.174        "is_configured": true,
00:15:23.174        "data_offset": 0,
00:15:23.174        "data_size": 65536
00:15:23.174      },
00:15:23.174      {
00:15:23.174        "name": "BaseBdev3",
00:15:23.174        "uuid": "0ad5891f-93f3-5d5e-91b6-096125fcf9e3",
00:15:23.174        "is_configured": true,
00:15:23.174        "data_offset": 0,
00:15:23.174        "data_size": 65536
00:15:23.174      }
00:15:23.174    ]
00:15:23.174  }'
00:15:23.174    11:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:23.174   11:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:23.174    11:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:23.434   11:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:23.434   11:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']'
00:15:23.434   11:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3
00:15:23.434   11:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']'
00:15:23.434   11:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=462
00:15:23.434   11:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:15:23.434   11:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:23.434   11:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:23.434   11:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:23.434   11:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:23.434   11:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:23.434    11:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:23.434    11:36:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:23.434    11:36:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:23.434    11:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:23.434    11:36:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:23.434   11:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:23.434    "name": "raid_bdev1",
00:15:23.434    "uuid": "31df1664-d88c-420e-bba6-da68e4c01d60",
00:15:23.434    "strip_size_kb": 64,
00:15:23.434    "state": "online",
00:15:23.434    "raid_level": "raid5f",
00:15:23.434    "superblock": false,
00:15:23.434    "num_base_bdevs": 3,
00:15:23.434    "num_base_bdevs_discovered": 3,
00:15:23.434    "num_base_bdevs_operational": 3,
00:15:23.434    "process": {
00:15:23.434      "type": "rebuild",
00:15:23.434      "target": "spare",
00:15:23.434      "progress": {
00:15:23.434        "blocks": 22528,
00:15:23.434        "percent": 17
00:15:23.434      }
00:15:23.434    },
00:15:23.434    "base_bdevs_list": [
00:15:23.434      {
00:15:23.434        "name": "spare",
00:15:23.434        "uuid": "74210367-31b2-5e9a-b044-5a00169e575d",
00:15:23.434        "is_configured": true,
00:15:23.434        "data_offset": 0,
00:15:23.434        "data_size": 65536
00:15:23.434      },
00:15:23.434      {
00:15:23.434        "name": "BaseBdev2",
00:15:23.434        "uuid": "36a419fa-ed1a-5e29-866d-86152ed06461",
00:15:23.434        "is_configured": true,
00:15:23.434        "data_offset": 0,
00:15:23.434        "data_size": 65536
00:15:23.434      },
00:15:23.434      {
00:15:23.434        "name": "BaseBdev3",
00:15:23.434        "uuid": "0ad5891f-93f3-5d5e-91b6-096125fcf9e3",
00:15:23.434        "is_configured": true,
00:15:23.434        "data_offset": 0,
00:15:23.434        "data_size": 65536
00:15:23.434      }
00:15:23.434    ]
00:15:23.434  }'
00:15:23.434    11:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:23.434   11:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:23.434    11:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:23.434   11:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:23.434   11:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1
00:15:24.373   11:36:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:15:24.373   11:36:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:24.373   11:36:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:24.373   11:36:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:24.373   11:36:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:24.373   11:36:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:24.373    11:36:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:24.373    11:36:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:24.373    11:36:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:24.373    11:36:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:24.633    11:36:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:24.633   11:36:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:24.633    "name": "raid_bdev1",
00:15:24.633    "uuid": "31df1664-d88c-420e-bba6-da68e4c01d60",
00:15:24.633    "strip_size_kb": 64,
00:15:24.633    "state": "online",
00:15:24.633    "raid_level": "raid5f",
00:15:24.633    "superblock": false,
00:15:24.633    "num_base_bdevs": 3,
00:15:24.633    "num_base_bdevs_discovered": 3,
00:15:24.633    "num_base_bdevs_operational": 3,
00:15:24.633    "process": {
00:15:24.633      "type": "rebuild",
00:15:24.633      "target": "spare",
00:15:24.633      "progress": {
00:15:24.633        "blocks": 45056,
00:15:24.633        "percent": 34
00:15:24.633      }
00:15:24.633    },
00:15:24.633    "base_bdevs_list": [
00:15:24.633      {
00:15:24.633        "name": "spare",
00:15:24.633        "uuid": "74210367-31b2-5e9a-b044-5a00169e575d",
00:15:24.633        "is_configured": true,
00:15:24.633        "data_offset": 0,
00:15:24.633        "data_size": 65536
00:15:24.633      },
00:15:24.633      {
00:15:24.633        "name": "BaseBdev2",
00:15:24.633        "uuid": "36a419fa-ed1a-5e29-866d-86152ed06461",
00:15:24.633        "is_configured": true,
00:15:24.633        "data_offset": 0,
00:15:24.633        "data_size": 65536
00:15:24.633      },
00:15:24.633      {
00:15:24.633        "name": "BaseBdev3",
00:15:24.633        "uuid": "0ad5891f-93f3-5d5e-91b6-096125fcf9e3",
00:15:24.633        "is_configured": true,
00:15:24.633        "data_offset": 0,
00:15:24.633        "data_size": 65536
00:15:24.633      }
00:15:24.633    ]
00:15:24.633  }'
00:15:24.633    11:36:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:24.633   11:36:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:24.633    11:36:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:24.633   11:36:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:24.633   11:36:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1
00:15:25.572   11:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:15:25.572   11:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:25.572   11:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:25.572   11:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:25.572   11:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:25.572   11:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:25.572    11:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:25.572    11:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:25.572    11:36:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:25.572    11:36:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:25.572    11:36:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:25.572   11:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:25.572    "name": "raid_bdev1",
00:15:25.572    "uuid": "31df1664-d88c-420e-bba6-da68e4c01d60",
00:15:25.572    "strip_size_kb": 64,
00:15:25.572    "state": "online",
00:15:25.572    "raid_level": "raid5f",
00:15:25.572    "superblock": false,
00:15:25.572    "num_base_bdevs": 3,
00:15:25.572    "num_base_bdevs_discovered": 3,
00:15:25.572    "num_base_bdevs_operational": 3,
00:15:25.572    "process": {
00:15:25.572      "type": "rebuild",
00:15:25.572      "target": "spare",
00:15:25.572      "progress": {
00:15:25.572        "blocks": 69632,
00:15:25.572        "percent": 53
00:15:25.572      }
00:15:25.572    },
00:15:25.572    "base_bdevs_list": [
00:15:25.572      {
00:15:25.572        "name": "spare",
00:15:25.572        "uuid": "74210367-31b2-5e9a-b044-5a00169e575d",
00:15:25.572        "is_configured": true,
00:15:25.572        "data_offset": 0,
00:15:25.572        "data_size": 65536
00:15:25.572      },
00:15:25.572      {
00:15:25.572        "name": "BaseBdev2",
00:15:25.572        "uuid": "36a419fa-ed1a-5e29-866d-86152ed06461",
00:15:25.572        "is_configured": true,
00:15:25.572        "data_offset": 0,
00:15:25.572        "data_size": 65536
00:15:25.572      },
00:15:25.572      {
00:15:25.572        "name": "BaseBdev3",
00:15:25.572        "uuid": "0ad5891f-93f3-5d5e-91b6-096125fcf9e3",
00:15:25.572        "is_configured": true,
00:15:25.572        "data_offset": 0,
00:15:25.572        "data_size": 65536
00:15:25.572      }
00:15:25.572    ]
00:15:25.572  }'
00:15:25.572    11:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:25.833   11:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:25.833    11:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:25.833   11:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:25.833   11:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1
00:15:26.771   11:36:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:15:26.771   11:36:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:26.772   11:36:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:26.772   11:36:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:26.772   11:36:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:26.772   11:36:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:26.772    11:36:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:26.772    11:36:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:26.772    11:36:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:26.772    11:36:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:26.772    11:36:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:26.772   11:36:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:26.772    "name": "raid_bdev1",
00:15:26.772    "uuid": "31df1664-d88c-420e-bba6-da68e4c01d60",
00:15:26.772    "strip_size_kb": 64,
00:15:26.772    "state": "online",
00:15:26.772    "raid_level": "raid5f",
00:15:26.772    "superblock": false,
00:15:26.772    "num_base_bdevs": 3,
00:15:26.772    "num_base_bdevs_discovered": 3,
00:15:26.772    "num_base_bdevs_operational": 3,
00:15:26.772    "process": {
00:15:26.772      "type": "rebuild",
00:15:26.772      "target": "spare",
00:15:26.772      "progress": {
00:15:26.772        "blocks": 92160,
00:15:26.772        "percent": 70
00:15:26.772      }
00:15:26.772    },
00:15:26.772    "base_bdevs_list": [
00:15:26.772      {
00:15:26.772        "name": "spare",
00:15:26.772        "uuid": "74210367-31b2-5e9a-b044-5a00169e575d",
00:15:26.772        "is_configured": true,
00:15:26.772        "data_offset": 0,
00:15:26.772        "data_size": 65536
00:15:26.772      },
00:15:26.772      {
00:15:26.772        "name": "BaseBdev2",
00:15:26.772        "uuid": "36a419fa-ed1a-5e29-866d-86152ed06461",
00:15:26.772        "is_configured": true,
00:15:26.772        "data_offset": 0,
00:15:26.772        "data_size": 65536
00:15:26.772      },
00:15:26.772      {
00:15:26.772        "name": "BaseBdev3",
00:15:26.772        "uuid": "0ad5891f-93f3-5d5e-91b6-096125fcf9e3",
00:15:26.772        "is_configured": true,
00:15:26.772        "data_offset": 0,
00:15:26.772        "data_size": 65536
00:15:26.772      }
00:15:26.772    ]
00:15:26.772  }'
00:15:26.772    11:36:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:26.772   11:36:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:26.772    11:36:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:27.031   11:36:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:27.031   11:36:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1
00:15:27.989   11:36:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:15:27.989   11:36:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:27.989   11:36:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:27.989   11:36:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:27.989   11:36:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:27.989   11:36:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:27.989    11:36:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:27.989    11:36:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:27.989    11:36:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:27.989    11:36:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:27.989    11:36:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:27.989   11:36:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:27.989    "name": "raid_bdev1",
00:15:27.989    "uuid": "31df1664-d88c-420e-bba6-da68e4c01d60",
00:15:27.989    "strip_size_kb": 64,
00:15:27.989    "state": "online",
00:15:27.989    "raid_level": "raid5f",
00:15:27.989    "superblock": false,
00:15:27.989    "num_base_bdevs": 3,
00:15:27.989    "num_base_bdevs_discovered": 3,
00:15:27.989    "num_base_bdevs_operational": 3,
00:15:27.989    "process": {
00:15:27.989      "type": "rebuild",
00:15:27.989      "target": "spare",
00:15:27.989      "progress": {
00:15:27.989        "blocks": 116736,
00:15:27.989        "percent": 89
00:15:27.989      }
00:15:27.989    },
00:15:27.989    "base_bdevs_list": [
00:15:27.989      {
00:15:27.989        "name": "spare",
00:15:27.989        "uuid": "74210367-31b2-5e9a-b044-5a00169e575d",
00:15:27.989        "is_configured": true,
00:15:27.989        "data_offset": 0,
00:15:27.989        "data_size": 65536
00:15:27.989      },
00:15:27.989      {
00:15:27.989        "name": "BaseBdev2",
00:15:27.989        "uuid": "36a419fa-ed1a-5e29-866d-86152ed06461",
00:15:27.989        "is_configured": true,
00:15:27.989        "data_offset": 0,
00:15:27.989        "data_size": 65536
00:15:27.989      },
00:15:27.989      {
00:15:27.989        "name": "BaseBdev3",
00:15:27.989        "uuid": "0ad5891f-93f3-5d5e-91b6-096125fcf9e3",
00:15:27.989        "is_configured": true,
00:15:27.989        "data_offset": 0,
00:15:27.989        "data_size": 65536
00:15:27.989      }
00:15:27.989    ]
00:15:27.989  }'
00:15:27.989    11:36:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:27.989   11:36:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:27.989    11:36:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:27.989   11:36:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:27.989   11:36:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1
00:15:28.558  [2024-12-16 11:36:54.569539] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:15:28.558  [2024-12-16 11:36:54.569737] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:15:28.558  [2024-12-16 11:36:54.569842] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:29.128   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:15:29.128   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:29.128   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:29.128   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:29.128   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:29.128   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:29.128    11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:29.128    11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:29.128    11:36:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:29.128    11:36:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:29.128    11:36:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:29.128   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:29.128    "name": "raid_bdev1",
00:15:29.128    "uuid": "31df1664-d88c-420e-bba6-da68e4c01d60",
00:15:29.128    "strip_size_kb": 64,
00:15:29.128    "state": "online",
00:15:29.128    "raid_level": "raid5f",
00:15:29.128    "superblock": false,
00:15:29.128    "num_base_bdevs": 3,
00:15:29.128    "num_base_bdevs_discovered": 3,
00:15:29.128    "num_base_bdevs_operational": 3,
00:15:29.128    "base_bdevs_list": [
00:15:29.128      {
00:15:29.128        "name": "spare",
00:15:29.128        "uuid": "74210367-31b2-5e9a-b044-5a00169e575d",
00:15:29.128        "is_configured": true,
00:15:29.128        "data_offset": 0,
00:15:29.128        "data_size": 65536
00:15:29.128      },
00:15:29.128      {
00:15:29.128        "name": "BaseBdev2",
00:15:29.128        "uuid": "36a419fa-ed1a-5e29-866d-86152ed06461",
00:15:29.128        "is_configured": true,
00:15:29.128        "data_offset": 0,
00:15:29.128        "data_size": 65536
00:15:29.128      },
00:15:29.129      {
00:15:29.129        "name": "BaseBdev3",
00:15:29.129        "uuid": "0ad5891f-93f3-5d5e-91b6-096125fcf9e3",
00:15:29.129        "is_configured": true,
00:15:29.129        "data_offset": 0,
00:15:29.129        "data_size": 65536
00:15:29.129      }
00:15:29.129    ]
00:15:29.129  }'
00:15:29.129    11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:29.129   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:15:29.129    11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:29.129   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:15:29.129   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break
00:15:29.129   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:29.129   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:29.129   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:15:29.129   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none
00:15:29.129   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:29.129    11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:29.129    11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:29.129    11:36:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:29.129    11:36:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:29.129    11:36:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:29.129   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:29.129    "name": "raid_bdev1",
00:15:29.129    "uuid": "31df1664-d88c-420e-bba6-da68e4c01d60",
00:15:29.129    "strip_size_kb": 64,
00:15:29.129    "state": "online",
00:15:29.129    "raid_level": "raid5f",
00:15:29.129    "superblock": false,
00:15:29.129    "num_base_bdevs": 3,
00:15:29.129    "num_base_bdevs_discovered": 3,
00:15:29.129    "num_base_bdevs_operational": 3,
00:15:29.129    "base_bdevs_list": [
00:15:29.129      {
00:15:29.129        "name": "spare",
00:15:29.129        "uuid": "74210367-31b2-5e9a-b044-5a00169e575d",
00:15:29.129        "is_configured": true,
00:15:29.129        "data_offset": 0,
00:15:29.129        "data_size": 65536
00:15:29.129      },
00:15:29.129      {
00:15:29.129        "name": "BaseBdev2",
00:15:29.129        "uuid": "36a419fa-ed1a-5e29-866d-86152ed06461",
00:15:29.129        "is_configured": true,
00:15:29.129        "data_offset": 0,
00:15:29.129        "data_size": 65536
00:15:29.129      },
00:15:29.129      {
00:15:29.129        "name": "BaseBdev3",
00:15:29.129        "uuid": "0ad5891f-93f3-5d5e-91b6-096125fcf9e3",
00:15:29.129        "is_configured": true,
00:15:29.129        "data_offset": 0,
00:15:29.129        "data_size": 65536
00:15:29.129      }
00:15:29.129    ]
00:15:29.129  }'
00:15:29.129    11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:29.389   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:29.389    11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:29.389   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:15:29.389   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:15:29.389   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:29.389   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:29.389   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:29.389   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:29.389   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:29.389   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:29.389   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:29.389   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:29.389   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:29.389    11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:29.389    11:36:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:29.389    11:36:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:29.389    11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:29.389    11:36:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:29.389   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:29.389    "name": "raid_bdev1",
00:15:29.389    "uuid": "31df1664-d88c-420e-bba6-da68e4c01d60",
00:15:29.389    "strip_size_kb": 64,
00:15:29.389    "state": "online",
00:15:29.389    "raid_level": "raid5f",
00:15:29.389    "superblock": false,
00:15:29.389    "num_base_bdevs": 3,
00:15:29.389    "num_base_bdevs_discovered": 3,
00:15:29.389    "num_base_bdevs_operational": 3,
00:15:29.389    "base_bdevs_list": [
00:15:29.389      {
00:15:29.389        "name": "spare",
00:15:29.389        "uuid": "74210367-31b2-5e9a-b044-5a00169e575d",
00:15:29.389        "is_configured": true,
00:15:29.389        "data_offset": 0,
00:15:29.389        "data_size": 65536
00:15:29.389      },
00:15:29.389      {
00:15:29.389        "name": "BaseBdev2",
00:15:29.389        "uuid": "36a419fa-ed1a-5e29-866d-86152ed06461",
00:15:29.389        "is_configured": true,
00:15:29.389        "data_offset": 0,
00:15:29.389        "data_size": 65536
00:15:29.389      },
00:15:29.389      {
00:15:29.389        "name": "BaseBdev3",
00:15:29.389        "uuid": "0ad5891f-93f3-5d5e-91b6-096125fcf9e3",
00:15:29.389        "is_configured": true,
00:15:29.389        "data_offset": 0,
00:15:29.389        "data_size": 65536
00:15:29.389      }
00:15:29.389    ]
00:15:29.389  }'
00:15:29.389   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:29.389   11:36:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:29.648   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:15:29.648   11:36:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:29.648   11:36:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:29.908  [2024-12-16 11:36:55.717148] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:29.908  [2024-12-16 11:36:55.717231] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:29.908  [2024-12-16 11:36:55.717349] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:29.908  [2024-12-16 11:36:55.717461] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:29.908  [2024-12-16 11:36:55.717509] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:15:29.908   11:36:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:29.908    11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length
00:15:29.908    11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:29.908    11:36:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:29.908    11:36:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:29.908    11:36:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:29.908   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:15:29.908   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:15:29.908   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:15:29.908   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:15:29.908   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:15:29.908   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:15:29.908   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:15:29.908   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:15:29.908   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:15:29.908   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:15:29.908   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:15:29.908   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:15:29.908   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:15:29.908  /dev/nbd0
00:15:30.169    11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:15:30.169   11:36:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:15:30.169   11:36:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:15:30.169   11:36:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i
00:15:30.169   11:36:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:15:30.169   11:36:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:15:30.169   11:36:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:15:30.169   11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break
00:15:30.169   11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:15:30.169   11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:15:30.169   11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:30.169  1+0 records in
00:15:30.169  1+0 records out
00:15:30.169  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000556822 s, 7.4 MB/s
00:15:30.169    11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:30.169   11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096
00:15:30.169   11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:30.169   11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:15:30.169   11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0
00:15:30.169   11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:30.169   11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:15:30.169   11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1
00:15:30.169  /dev/nbd1
00:15:30.429    11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:15:30.429   11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:15:30.429   11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:15:30.429   11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i
00:15:30.429   11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:15:30.429   11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:15:30.429   11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:15:30.429   11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break
00:15:30.429   11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:15:30.429   11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:15:30.429   11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:30.429  1+0 records in
00:15:30.429  1+0 records out
00:15:30.429  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419007 s, 9.8 MB/s
00:15:30.429    11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:30.429   11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096
00:15:30.429   11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:30.429   11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:15:30.429   11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0
00:15:30.429   11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:30.429   11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:15:30.429   11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
00:15:30.429   11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:15:30.429   11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:15:30.429   11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:15:30.429   11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:15:30.429   11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:15:30.429   11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:30.429   11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:15:30.689    11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:15:30.689   11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:15:30.689   11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:15:30.689   11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:30.689   11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:30.689   11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:15:30.689   11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:15:30.689   11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:15:30.689   11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:30.689   11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:15:30.949    11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:15:30.949   11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:15:30.949   11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:15:30.949   11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:30.949   11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:30.949   11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:15:30.949   11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:15:30.949   11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:15:30.949   11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']'
00:15:30.949   11:36:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 92459
00:15:30.949   11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 92459 ']'
00:15:30.949   11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 92459
00:15:30.949    11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname
00:15:30.949   11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:15:30.949    11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92459
00:15:30.949  killing process with pid 92459
00:15:30.949  Received shutdown signal, test time was about 60.000000 seconds
00:15:30.949  
00:15:30.949                                                                                                  Latency(us)
00:15:30.949  
[2024-12-16T11:36:57.016Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:15:30.949  
[2024-12-16T11:36:57.016Z]  ===================================================================================================================
00:15:30.949  
[2024-12-16T11:36:57.016Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:15:30.949   11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:15:30.949   11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:15:30.949   11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92459'
00:15:30.949   11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 92459
00:15:30.949  [2024-12-16 11:36:56.821183] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:15:30.949   11:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 92459
00:15:30.949  [2024-12-16 11:36:56.861732] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:15:31.209   11:36:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0
00:15:31.209  
00:15:31.209  real	0m13.573s
00:15:31.209  user	0m17.059s
00:15:31.209  sys	0m1.907s
00:15:31.209  ************************************
00:15:31.209  END TEST raid5f_rebuild_test
00:15:31.209  ************************************
00:15:31.209   11:36:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:15:31.209   11:36:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:31.209   11:36:57 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true
00:15:31.209   11:36:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:15:31.209   11:36:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:15:31.209   11:36:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:15:31.209  ************************************
00:15:31.209  START TEST raid5f_rebuild_test_sb
00:15:31.209  ************************************
00:15:31.209   11:36:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true
00:15:31.209   11:36:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f
00:15:31.209   11:36:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3
00:15:31.209   11:36:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true
00:15:31.209   11:36:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:15:31.209   11:36:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true
00:15:31.209    11:36:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:15:31.209    11:36:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:15:31.209    11:36:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:15:31.209    11:36:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:15:31.209    11:36:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:15:31.209    11:36:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:15:31.209    11:36:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:15:31.209    11:36:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:15:31.209    11:36:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
00:15:31.209    11:36:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:15:31.209    11:36:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:15:31.209   11:36:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:15:31.209   11:36:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:15:31.209   11:36:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:15:31.209   11:36:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size
00:15:31.209   11:36:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg
00:15:31.209   11:36:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:15:31.209   11:36:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset
00:15:31.209   11:36:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']'
00:15:31.209   11:36:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']'
00:15:31.209   11:36:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64
00:15:31.209   11:36:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64'
00:15:31.209   11:36:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
00:15:31.209   11:36:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
00:15:31.209   11:36:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=92884
00:15:31.209   11:36:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:15:31.209   11:36:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 92884
00:15:31.209   11:36:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 92884 ']'
00:15:31.209  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:31.209   11:36:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:31.209   11:36:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:15:31.209   11:36:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:31.209   11:36:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:15:31.209   11:36:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:31.209  I/O size of 3145728 is greater than zero copy threshold (65536).
00:15:31.209  Zero copy mechanism will not be used.
00:15:31.209  [2024-12-16 11:36:57.257122] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:15:31.209  [2024-12-16 11:36:57.257257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92884 ]
00:15:31.469  [2024-12-16 11:36:57.396410] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:31.469  [2024-12-16 11:36:57.441542] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:15:31.469  [2024-12-16 11:36:57.484015] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:31.469  [2024-12-16 11:36:57.484052] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:32.038   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:15:32.038   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0
00:15:32.038   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:15:32.038   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:15:32.038   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.038   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:32.298  BaseBdev1_malloc
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:32.298  [2024-12-16 11:36:58.122574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:15:32.298  [2024-12-16 11:36:58.122691] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:32.298  [2024-12-16 11:36:58.122745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:15:32.298  [2024-12-16 11:36:58.122799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:32.298  [2024-12-16 11:36:58.125192] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:32.298  [2024-12-16 11:36:58.125273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:15:32.298  BaseBdev1
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:32.298  BaseBdev2_malloc
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:32.298  [2024-12-16 11:36:58.163067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:15:32.298  [2024-12-16 11:36:58.163196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:32.298  [2024-12-16 11:36:58.163269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:15:32.298  [2024-12-16 11:36:58.163295] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:32.298  [2024-12-16 11:36:58.166230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:32.298  [2024-12-16 11:36:58.166324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:15:32.298  BaseBdev2
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:32.298  BaseBdev3_malloc
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:32.298  [2024-12-16 11:36:58.192260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:15:32.298  [2024-12-16 11:36:58.192370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:32.298  [2024-12-16 11:36:58.192420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:15:32.298  [2024-12-16 11:36:58.192462] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:32.298  [2024-12-16 11:36:58.194806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:32.298  [2024-12-16 11:36:58.194880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:15:32.298  BaseBdev3
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:32.298  spare_malloc
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:32.298  spare_delay
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:32.298  [2024-12-16 11:36:58.233241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:15:32.298  [2024-12-16 11:36:58.233353] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:32.298  [2024-12-16 11:36:58.233400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:15:32.298  [2024-12-16 11:36:58.233412] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:32.298  [2024-12-16 11:36:58.235813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:32.298  [2024-12-16 11:36:58.235903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:15:32.298  spare
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.298   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:32.298  [2024-12-16 11:36:58.245302] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:32.299  [2024-12-16 11:36:58.247399] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:32.299  [2024-12-16 11:36:58.247523] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:32.299  [2024-12-16 11:36:58.247753] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:15:32.299  [2024-12-16 11:36:58.247816] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:15:32.299  [2024-12-16 11:36:58.248164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:15:32.299  [2024-12-16 11:36:58.248695] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:15:32.299  [2024-12-16 11:36:58.248754] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:15:32.299  [2024-12-16 11:36:58.248941] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:32.299   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.299   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:15:32.299   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:32.299   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:32.299   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:32.299   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:32.299   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:32.299   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:32.299   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:32.299   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:32.299   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:32.299    11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:32.299    11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:32.299    11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.299    11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:32.299    11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.299   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:32.299    "name": "raid_bdev1",
00:15:32.299    "uuid": "3ce803cb-7e0a-4498-b42a-2b2acaab4eb9",
00:15:32.299    "strip_size_kb": 64,
00:15:32.299    "state": "online",
00:15:32.299    "raid_level": "raid5f",
00:15:32.299    "superblock": true,
00:15:32.299    "num_base_bdevs": 3,
00:15:32.299    "num_base_bdevs_discovered": 3,
00:15:32.299    "num_base_bdevs_operational": 3,
00:15:32.299    "base_bdevs_list": [
00:15:32.299      {
00:15:32.299        "name": "BaseBdev1",
00:15:32.299        "uuid": "b9f247a7-19e4-5411-96bd-cbb888632e15",
00:15:32.299        "is_configured": true,
00:15:32.299        "data_offset": 2048,
00:15:32.299        "data_size": 63488
00:15:32.299      },
00:15:32.299      {
00:15:32.299        "name": "BaseBdev2",
00:15:32.299        "uuid": "09f9cf1e-2704-57ae-8edb-b6b236e87300",
00:15:32.299        "is_configured": true,
00:15:32.299        "data_offset": 2048,
00:15:32.299        "data_size": 63488
00:15:32.299      },
00:15:32.299      {
00:15:32.299        "name": "BaseBdev3",
00:15:32.299        "uuid": "631b55d5-72c6-5592-b9fd-b6ae6396fece",
00:15:32.299        "is_configured": true,
00:15:32.299        "data_offset": 2048,
00:15:32.299        "data_size": 63488
00:15:32.299      }
00:15:32.299    ]
00:15:32.299  }'
00:15:32.299   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:32.299   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:32.868    11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:32.868    11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:15:32.868    11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.868    11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:32.868  [2024-12-16 11:36:58.725984] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:32.868    11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.868   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976
00:15:32.868    11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:15:32.868    11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:32.868    11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.868    11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:32.868    11:36:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.868   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048
00:15:32.868   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:15:32.868   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:15:32.868   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:15:32.868   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:15:32.868   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:15:32.868   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:15:32.868   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:15:32.868   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:15:32.868   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:15:32.868   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:15:32.868   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:15:32.869   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:15:32.869   11:36:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:15:33.128  [2024-12-16 11:36:58.977415] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:15:33.128  /dev/nbd0
00:15:33.128    11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:15:33.128   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:15:33.128   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:15:33.128   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i
00:15:33.128   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:15:33.128   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:15:33.128   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:15:33.128   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break
00:15:33.128   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:15:33.128   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:15:33.128   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:33.128  1+0 records in
00:15:33.128  1+0 records out
00:15:33.128  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527101 s, 7.8 MB/s
00:15:33.128    11:36:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:33.128   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096
00:15:33.128   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:33.128   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:15:33.129   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0
00:15:33.129   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:33.129   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:15:33.129   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']'
00:15:33.129   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256
00:15:33.129   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128
00:15:33.129   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct
00:15:33.388  496+0 records in
00:15:33.388  496+0 records out
00:15:33.388  65011712 bytes (65 MB, 62 MiB) copied, 0.318814 s, 204 MB/s
00:15:33.388   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:15:33.388   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:15:33.388   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:15:33.388   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:15:33.388   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:15:33.388   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:33.388   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:15:33.648    11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:15:33.648  [2024-12-16 11:36:59.594581] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:33.648   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:15:33.648   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:15:33.648   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:33.648   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:33.648   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:15:33.648   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:15:33.648   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:15:33.648   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:15:33.648   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:33.648   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:33.648  [2024-12-16 11:36:59.611986] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:15:33.648   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:33.648   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:15:33.648   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:33.648   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:33.648   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:33.648   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:33.648   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:33.648   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:33.648   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:33.648   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:33.648   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:33.648    11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:33.648    11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:33.648    11:36:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:33.648    11:36:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:33.648    11:36:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:33.648   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:33.648    "name": "raid_bdev1",
00:15:33.648    "uuid": "3ce803cb-7e0a-4498-b42a-2b2acaab4eb9",
00:15:33.648    "strip_size_kb": 64,
00:15:33.648    "state": "online",
00:15:33.648    "raid_level": "raid5f",
00:15:33.648    "superblock": true,
00:15:33.648    "num_base_bdevs": 3,
00:15:33.648    "num_base_bdevs_discovered": 2,
00:15:33.648    "num_base_bdevs_operational": 2,
00:15:33.648    "base_bdevs_list": [
00:15:33.648      {
00:15:33.648        "name": null,
00:15:33.648        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:33.648        "is_configured": false,
00:15:33.648        "data_offset": 0,
00:15:33.648        "data_size": 63488
00:15:33.648      },
00:15:33.648      {
00:15:33.648        "name": "BaseBdev2",
00:15:33.648        "uuid": "09f9cf1e-2704-57ae-8edb-b6b236e87300",
00:15:33.648        "is_configured": true,
00:15:33.648        "data_offset": 2048,
00:15:33.648        "data_size": 63488
00:15:33.648      },
00:15:33.648      {
00:15:33.648        "name": "BaseBdev3",
00:15:33.648        "uuid": "631b55d5-72c6-5592-b9fd-b6ae6396fece",
00:15:33.648        "is_configured": true,
00:15:33.648        "data_offset": 2048,
00:15:33.648        "data_size": 63488
00:15:33.648      }
00:15:33.648    ]
00:15:33.648  }'
00:15:33.648   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:33.648   11:36:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:34.217   11:37:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:15:34.217   11:37:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:34.217   11:37:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:34.217  [2024-12-16 11:37:00.099232] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:34.217  [2024-12-16 11:37:00.103392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028de0
00:15:34.217   11:37:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:34.217   11:37:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1
00:15:34.217  [2024-12-16 11:37:00.105886] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:15:35.156   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:35.156   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:35.156   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:35.156   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:35.156   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:35.156    11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:35.156    11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:35.156    11:37:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:35.156    11:37:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:35.156    11:37:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:35.157   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:35.157    "name": "raid_bdev1",
00:15:35.157    "uuid": "3ce803cb-7e0a-4498-b42a-2b2acaab4eb9",
00:15:35.157    "strip_size_kb": 64,
00:15:35.157    "state": "online",
00:15:35.157    "raid_level": "raid5f",
00:15:35.157    "superblock": true,
00:15:35.157    "num_base_bdevs": 3,
00:15:35.157    "num_base_bdevs_discovered": 3,
00:15:35.157    "num_base_bdevs_operational": 3,
00:15:35.157    "process": {
00:15:35.157      "type": "rebuild",
00:15:35.157      "target": "spare",
00:15:35.157      "progress": {
00:15:35.157        "blocks": 20480,
00:15:35.157        "percent": 16
00:15:35.157      }
00:15:35.157    },
00:15:35.157    "base_bdevs_list": [
00:15:35.157      {
00:15:35.157        "name": "spare",
00:15:35.157        "uuid": "484fb5eb-e278-5833-bfdb-ce0a647659e5",
00:15:35.157        "is_configured": true,
00:15:35.157        "data_offset": 2048,
00:15:35.157        "data_size": 63488
00:15:35.157      },
00:15:35.157      {
00:15:35.157        "name": "BaseBdev2",
00:15:35.157        "uuid": "09f9cf1e-2704-57ae-8edb-b6b236e87300",
00:15:35.157        "is_configured": true,
00:15:35.157        "data_offset": 2048,
00:15:35.157        "data_size": 63488
00:15:35.157      },
00:15:35.157      {
00:15:35.157        "name": "BaseBdev3",
00:15:35.157        "uuid": "631b55d5-72c6-5592-b9fd-b6ae6396fece",
00:15:35.157        "is_configured": true,
00:15:35.157        "data_offset": 2048,
00:15:35.157        "data_size": 63488
00:15:35.157      }
00:15:35.157    ]
00:15:35.157  }'
00:15:35.157    11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:35.157   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:35.157    11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:35.416   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:35.416   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:15:35.416   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:35.416   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:35.416  [2024-12-16 11:37:01.258822] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:35.416  [2024-12-16 11:37:01.315976] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:15:35.416  [2024-12-16 11:37:01.316126] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:35.416  [2024-12-16 11:37:01.316175] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:35.416  [2024-12-16 11:37:01.316219] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:15:35.416   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:35.416   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:15:35.416   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:35.416   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:35.417   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:35.417   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:35.417   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:35.417   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:35.417   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:35.417   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:35.417   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:35.417    11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:35.417    11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:35.417    11:37:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:35.417    11:37:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:35.417    11:37:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:35.417   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:35.417    "name": "raid_bdev1",
00:15:35.417    "uuid": "3ce803cb-7e0a-4498-b42a-2b2acaab4eb9",
00:15:35.417    "strip_size_kb": 64,
00:15:35.417    "state": "online",
00:15:35.417    "raid_level": "raid5f",
00:15:35.417    "superblock": true,
00:15:35.417    "num_base_bdevs": 3,
00:15:35.417    "num_base_bdevs_discovered": 2,
00:15:35.417    "num_base_bdevs_operational": 2,
00:15:35.417    "base_bdevs_list": [
00:15:35.417      {
00:15:35.417        "name": null,
00:15:35.417        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:35.417        "is_configured": false,
00:15:35.417        "data_offset": 0,
00:15:35.417        "data_size": 63488
00:15:35.417      },
00:15:35.417      {
00:15:35.417        "name": "BaseBdev2",
00:15:35.417        "uuid": "09f9cf1e-2704-57ae-8edb-b6b236e87300",
00:15:35.417        "is_configured": true,
00:15:35.417        "data_offset": 2048,
00:15:35.417        "data_size": 63488
00:15:35.417      },
00:15:35.417      {
00:15:35.417        "name": "BaseBdev3",
00:15:35.417        "uuid": "631b55d5-72c6-5592-b9fd-b6ae6396fece",
00:15:35.417        "is_configured": true,
00:15:35.417        "data_offset": 2048,
00:15:35.417        "data_size": 63488
00:15:35.417      }
00:15:35.417    ]
00:15:35.417  }'
00:15:35.417   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:35.417   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:36.021   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:36.021   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:36.021   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:15:36.021   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:15:36.021   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:36.021    11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:36.021    11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:36.021    11:37:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:36.021    11:37:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:36.021    11:37:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:36.021   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:36.021    "name": "raid_bdev1",
00:15:36.022    "uuid": "3ce803cb-7e0a-4498-b42a-2b2acaab4eb9",
00:15:36.022    "strip_size_kb": 64,
00:15:36.022    "state": "online",
00:15:36.022    "raid_level": "raid5f",
00:15:36.022    "superblock": true,
00:15:36.022    "num_base_bdevs": 3,
00:15:36.022    "num_base_bdevs_discovered": 2,
00:15:36.022    "num_base_bdevs_operational": 2,
00:15:36.022    "base_bdevs_list": [
00:15:36.022      {
00:15:36.022        "name": null,
00:15:36.022        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:36.022        "is_configured": false,
00:15:36.022        "data_offset": 0,
00:15:36.022        "data_size": 63488
00:15:36.022      },
00:15:36.022      {
00:15:36.022        "name": "BaseBdev2",
00:15:36.022        "uuid": "09f9cf1e-2704-57ae-8edb-b6b236e87300",
00:15:36.022        "is_configured": true,
00:15:36.022        "data_offset": 2048,
00:15:36.022        "data_size": 63488
00:15:36.022      },
00:15:36.022      {
00:15:36.022        "name": "BaseBdev3",
00:15:36.022        "uuid": "631b55d5-72c6-5592-b9fd-b6ae6396fece",
00:15:36.022        "is_configured": true,
00:15:36.022        "data_offset": 2048,
00:15:36.022        "data_size": 63488
00:15:36.022      }
00:15:36.022    ]
00:15:36.022  }'
00:15:36.022    11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:36.022   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:36.022    11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:36.022   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:15:36.022   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:15:36.022   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:36.022   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:36.022  [2024-12-16 11:37:01.909210] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:36.022  [2024-12-16 11:37:01.913724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028eb0
00:15:36.022   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:36.022   11:37:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1
00:15:36.022  [2024-12-16 11:37:01.915939] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:15:36.962   11:37:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:36.962   11:37:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:36.962   11:37:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:36.962   11:37:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:36.962   11:37:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:36.962    11:37:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:36.962    11:37:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:36.962    11:37:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:36.962    11:37:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:36.962    11:37:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:36.962   11:37:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:36.962    "name": "raid_bdev1",
00:15:36.962    "uuid": "3ce803cb-7e0a-4498-b42a-2b2acaab4eb9",
00:15:36.962    "strip_size_kb": 64,
00:15:36.962    "state": "online",
00:15:36.962    "raid_level": "raid5f",
00:15:36.962    "superblock": true,
00:15:36.962    "num_base_bdevs": 3,
00:15:36.962    "num_base_bdevs_discovered": 3,
00:15:36.962    "num_base_bdevs_operational": 3,
00:15:36.962    "process": {
00:15:36.962      "type": "rebuild",
00:15:36.962      "target": "spare",
00:15:36.962      "progress": {
00:15:36.962        "blocks": 20480,
00:15:36.962        "percent": 16
00:15:36.962      }
00:15:36.962    },
00:15:36.962    "base_bdevs_list": [
00:15:36.962      {
00:15:36.962        "name": "spare",
00:15:36.962        "uuid": "484fb5eb-e278-5833-bfdb-ce0a647659e5",
00:15:36.962        "is_configured": true,
00:15:36.962        "data_offset": 2048,
00:15:36.962        "data_size": 63488
00:15:36.962      },
00:15:36.962      {
00:15:36.962        "name": "BaseBdev2",
00:15:36.962        "uuid": "09f9cf1e-2704-57ae-8edb-b6b236e87300",
00:15:36.962        "is_configured": true,
00:15:36.962        "data_offset": 2048,
00:15:36.962        "data_size": 63488
00:15:36.962      },
00:15:36.962      {
00:15:36.962        "name": "BaseBdev3",
00:15:36.962        "uuid": "631b55d5-72c6-5592-b9fd-b6ae6396fece",
00:15:36.962        "is_configured": true,
00:15:36.962        "data_offset": 2048,
00:15:36.962        "data_size": 63488
00:15:36.962      }
00:15:36.962    ]
00:15:36.962  }'
00:15:36.962    11:37:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:36.962   11:37:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:36.962    11:37:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:37.222   11:37:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:37.222   11:37:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:15:37.222   11:37:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:15:37.222  /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:15:37.222   11:37:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3
00:15:37.222   11:37:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']'
00:15:37.222   11:37:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=476
00:15:37.222   11:37:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:15:37.222   11:37:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:37.222   11:37:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:37.222   11:37:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:37.222   11:37:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:37.222   11:37:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:37.222    11:37:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:37.222    11:37:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:37.222    11:37:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:37.222    11:37:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:37.222    11:37:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:37.222   11:37:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:37.222    "name": "raid_bdev1",
00:15:37.222    "uuid": "3ce803cb-7e0a-4498-b42a-2b2acaab4eb9",
00:15:37.222    "strip_size_kb": 64,
00:15:37.222    "state": "online",
00:15:37.222    "raid_level": "raid5f",
00:15:37.222    "superblock": true,
00:15:37.222    "num_base_bdevs": 3,
00:15:37.222    "num_base_bdevs_discovered": 3,
00:15:37.222    "num_base_bdevs_operational": 3,
00:15:37.222    "process": {
00:15:37.222      "type": "rebuild",
00:15:37.222      "target": "spare",
00:15:37.222      "progress": {
00:15:37.222        "blocks": 22528,
00:15:37.222        "percent": 17
00:15:37.222      }
00:15:37.222    },
00:15:37.222    "base_bdevs_list": [
00:15:37.222      {
00:15:37.222        "name": "spare",
00:15:37.222        "uuid": "484fb5eb-e278-5833-bfdb-ce0a647659e5",
00:15:37.222        "is_configured": true,
00:15:37.222        "data_offset": 2048,
00:15:37.222        "data_size": 63488
00:15:37.222      },
00:15:37.222      {
00:15:37.222        "name": "BaseBdev2",
00:15:37.222        "uuid": "09f9cf1e-2704-57ae-8edb-b6b236e87300",
00:15:37.222        "is_configured": true,
00:15:37.222        "data_offset": 2048,
00:15:37.222        "data_size": 63488
00:15:37.222      },
00:15:37.222      {
00:15:37.222        "name": "BaseBdev3",
00:15:37.222        "uuid": "631b55d5-72c6-5592-b9fd-b6ae6396fece",
00:15:37.222        "is_configured": true,
00:15:37.222        "data_offset": 2048,
00:15:37.222        "data_size": 63488
00:15:37.222      }
00:15:37.222    ]
00:15:37.222  }'
00:15:37.222    11:37:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:37.223   11:37:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:37.223    11:37:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:37.223   11:37:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:37.223   11:37:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:15:38.161   11:37:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:15:38.161   11:37:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:38.161   11:37:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:38.161   11:37:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:38.161   11:37:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:38.161   11:37:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:38.161    11:37:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:38.161    11:37:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:38.161    11:37:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:38.161    11:37:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:38.161    11:37:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:38.161   11:37:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:38.161    "name": "raid_bdev1",
00:15:38.161    "uuid": "3ce803cb-7e0a-4498-b42a-2b2acaab4eb9",
00:15:38.161    "strip_size_kb": 64,
00:15:38.161    "state": "online",
00:15:38.161    "raid_level": "raid5f",
00:15:38.161    "superblock": true,
00:15:38.161    "num_base_bdevs": 3,
00:15:38.161    "num_base_bdevs_discovered": 3,
00:15:38.161    "num_base_bdevs_operational": 3,
00:15:38.161    "process": {
00:15:38.161      "type": "rebuild",
00:15:38.161      "target": "spare",
00:15:38.161      "progress": {
00:15:38.161        "blocks": 45056,
00:15:38.161        "percent": 35
00:15:38.161      }
00:15:38.161    },
00:15:38.161    "base_bdevs_list": [
00:15:38.161      {
00:15:38.161        "name": "spare",
00:15:38.161        "uuid": "484fb5eb-e278-5833-bfdb-ce0a647659e5",
00:15:38.161        "is_configured": true,
00:15:38.161        "data_offset": 2048,
00:15:38.161        "data_size": 63488
00:15:38.161      },
00:15:38.161      {
00:15:38.161        "name": "BaseBdev2",
00:15:38.161        "uuid": "09f9cf1e-2704-57ae-8edb-b6b236e87300",
00:15:38.161        "is_configured": true,
00:15:38.161        "data_offset": 2048,
00:15:38.161        "data_size": 63488
00:15:38.161      },
00:15:38.161      {
00:15:38.161        "name": "BaseBdev3",
00:15:38.161        "uuid": "631b55d5-72c6-5592-b9fd-b6ae6396fece",
00:15:38.161        "is_configured": true,
00:15:38.161        "data_offset": 2048,
00:15:38.161        "data_size": 63488
00:15:38.161      }
00:15:38.161    ]
00:15:38.161  }'
00:15:38.161    11:37:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:38.421   11:37:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:38.421    11:37:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:38.421   11:37:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:38.421   11:37:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:15:39.358   11:37:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:15:39.358   11:37:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:39.358   11:37:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:39.358   11:37:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:39.358   11:37:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:39.358   11:37:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:39.358    11:37:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:39.358    11:37:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:39.358    11:37:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:39.358    11:37:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:39.358    11:37:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:39.358   11:37:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:39.358    "name": "raid_bdev1",
00:15:39.358    "uuid": "3ce803cb-7e0a-4498-b42a-2b2acaab4eb9",
00:15:39.358    "strip_size_kb": 64,
00:15:39.358    "state": "online",
00:15:39.358    "raid_level": "raid5f",
00:15:39.358    "superblock": true,
00:15:39.358    "num_base_bdevs": 3,
00:15:39.358    "num_base_bdevs_discovered": 3,
00:15:39.358    "num_base_bdevs_operational": 3,
00:15:39.358    "process": {
00:15:39.358      "type": "rebuild",
00:15:39.358      "target": "spare",
00:15:39.358      "progress": {
00:15:39.358        "blocks": 67584,
00:15:39.358        "percent": 53
00:15:39.358      }
00:15:39.358    },
00:15:39.358    "base_bdevs_list": [
00:15:39.358      {
00:15:39.358        "name": "spare",
00:15:39.358        "uuid": "484fb5eb-e278-5833-bfdb-ce0a647659e5",
00:15:39.358        "is_configured": true,
00:15:39.358        "data_offset": 2048,
00:15:39.358        "data_size": 63488
00:15:39.358      },
00:15:39.358      {
00:15:39.358        "name": "BaseBdev2",
00:15:39.358        "uuid": "09f9cf1e-2704-57ae-8edb-b6b236e87300",
00:15:39.358        "is_configured": true,
00:15:39.358        "data_offset": 2048,
00:15:39.358        "data_size": 63488
00:15:39.358      },
00:15:39.358      {
00:15:39.358        "name": "BaseBdev3",
00:15:39.358        "uuid": "631b55d5-72c6-5592-b9fd-b6ae6396fece",
00:15:39.358        "is_configured": true,
00:15:39.358        "data_offset": 2048,
00:15:39.358        "data_size": 63488
00:15:39.358      }
00:15:39.358    ]
00:15:39.358  }'
00:15:39.358    11:37:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:39.358   11:37:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:39.358    11:37:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:39.617   11:37:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:39.617   11:37:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:15:40.555   11:37:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:15:40.555   11:37:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:40.555   11:37:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:40.555   11:37:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:40.555   11:37:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:40.555   11:37:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:40.555    11:37:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:40.555    11:37:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:40.555    11:37:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:40.555    11:37:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:40.555    11:37:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:40.555   11:37:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:40.555    "name": "raid_bdev1",
00:15:40.555    "uuid": "3ce803cb-7e0a-4498-b42a-2b2acaab4eb9",
00:15:40.555    "strip_size_kb": 64,
00:15:40.555    "state": "online",
00:15:40.555    "raid_level": "raid5f",
00:15:40.555    "superblock": true,
00:15:40.555    "num_base_bdevs": 3,
00:15:40.555    "num_base_bdevs_discovered": 3,
00:15:40.555    "num_base_bdevs_operational": 3,
00:15:40.555    "process": {
00:15:40.555      "type": "rebuild",
00:15:40.555      "target": "spare",
00:15:40.555      "progress": {
00:15:40.555        "blocks": 90112,
00:15:40.555        "percent": 70
00:15:40.555      }
00:15:40.555    },
00:15:40.555    "base_bdevs_list": [
00:15:40.555      {
00:15:40.555        "name": "spare",
00:15:40.555        "uuid": "484fb5eb-e278-5833-bfdb-ce0a647659e5",
00:15:40.555        "is_configured": true,
00:15:40.555        "data_offset": 2048,
00:15:40.555        "data_size": 63488
00:15:40.555      },
00:15:40.555      {
00:15:40.555        "name": "BaseBdev2",
00:15:40.555        "uuid": "09f9cf1e-2704-57ae-8edb-b6b236e87300",
00:15:40.555        "is_configured": true,
00:15:40.555        "data_offset": 2048,
00:15:40.555        "data_size": 63488
00:15:40.555      },
00:15:40.555      {
00:15:40.555        "name": "BaseBdev3",
00:15:40.555        "uuid": "631b55d5-72c6-5592-b9fd-b6ae6396fece",
00:15:40.555        "is_configured": true,
00:15:40.555        "data_offset": 2048,
00:15:40.555        "data_size": 63488
00:15:40.555      }
00:15:40.555    ]
00:15:40.555  }'
00:15:40.555    11:37:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:40.555   11:37:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:40.555    11:37:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:40.555   11:37:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:40.555   11:37:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:15:41.936   11:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:15:41.936   11:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:41.937   11:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:41.937   11:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:41.937   11:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:41.937   11:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:41.937    11:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:41.937    11:37:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:41.937    11:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:41.937    11:37:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:41.937    11:37:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:41.937   11:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:41.937    "name": "raid_bdev1",
00:15:41.937    "uuid": "3ce803cb-7e0a-4498-b42a-2b2acaab4eb9",
00:15:41.937    "strip_size_kb": 64,
00:15:41.937    "state": "online",
00:15:41.937    "raid_level": "raid5f",
00:15:41.937    "superblock": true,
00:15:41.937    "num_base_bdevs": 3,
00:15:41.937    "num_base_bdevs_discovered": 3,
00:15:41.937    "num_base_bdevs_operational": 3,
00:15:41.937    "process": {
00:15:41.937      "type": "rebuild",
00:15:41.937      "target": "spare",
00:15:41.937      "progress": {
00:15:41.937        "blocks": 114688,
00:15:41.937        "percent": 90
00:15:41.937      }
00:15:41.937    },
00:15:41.937    "base_bdevs_list": [
00:15:41.937      {
00:15:41.937        "name": "spare",
00:15:41.937        "uuid": "484fb5eb-e278-5833-bfdb-ce0a647659e5",
00:15:41.937        "is_configured": true,
00:15:41.937        "data_offset": 2048,
00:15:41.937        "data_size": 63488
00:15:41.937      },
00:15:41.937      {
00:15:41.937        "name": "BaseBdev2",
00:15:41.937        "uuid": "09f9cf1e-2704-57ae-8edb-b6b236e87300",
00:15:41.937        "is_configured": true,
00:15:41.937        "data_offset": 2048,
00:15:41.937        "data_size": 63488
00:15:41.937      },
00:15:41.937      {
00:15:41.937        "name": "BaseBdev3",
00:15:41.937        "uuid": "631b55d5-72c6-5592-b9fd-b6ae6396fece",
00:15:41.937        "is_configured": true,
00:15:41.937        "data_offset": 2048,
00:15:41.937        "data_size": 63488
00:15:41.937      }
00:15:41.937    ]
00:15:41.937  }'
00:15:41.937    11:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:41.937   11:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:41.937    11:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:41.937   11:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:41.937   11:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
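(Reader's aside, not part of the test output: a hedged check of the "percent" field above, assuming it is blocks over the total rebuild span. With raid5f on 3 base bdevs, two carry data per stripe, so the span is 2 * 63488 = 126976 blocks, which matches the "blockcnt 126976" logged later in this run.)
    blocks=114688; total=$(( 2 * 63488 ))   # 126976, see "blockcnt 126976" further down in this log
    echo $(( blocks * 100 / total ))        # 90 -- agrees with "percent": 90 above; 90112 gives 70, 20480 gives 16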
00:15:42.196  [2024-12-16 11:37:08.165296] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:15:42.196  [2024-12-16 11:37:08.165398] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:15:42.196  [2024-12-16 11:37:08.165599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:42.765   11:37:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:15:42.765   11:37:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:42.765   11:37:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:42.765   11:37:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:42.765   11:37:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:42.765   11:37:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:42.765    11:37:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:42.765    11:37:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:42.765    11:37:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:42.765    11:37:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:42.765    11:37:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:42.765   11:37:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:42.765    "name": "raid_bdev1",
00:15:42.765    "uuid": "3ce803cb-7e0a-4498-b42a-2b2acaab4eb9",
00:15:42.765    "strip_size_kb": 64,
00:15:42.765    "state": "online",
00:15:42.765    "raid_level": "raid5f",
00:15:42.765    "superblock": true,
00:15:42.765    "num_base_bdevs": 3,
00:15:42.765    "num_base_bdevs_discovered": 3,
00:15:42.765    "num_base_bdevs_operational": 3,
00:15:42.765    "base_bdevs_list": [
00:15:42.765      {
00:15:42.765        "name": "spare",
00:15:42.765        "uuid": "484fb5eb-e278-5833-bfdb-ce0a647659e5",
00:15:42.765        "is_configured": true,
00:15:42.765        "data_offset": 2048,
00:15:42.765        "data_size": 63488
00:15:42.765      },
00:15:42.765      {
00:15:42.765        "name": "BaseBdev2",
00:15:42.765        "uuid": "09f9cf1e-2704-57ae-8edb-b6b236e87300",
00:15:42.765        "is_configured": true,
00:15:42.765        "data_offset": 2048,
00:15:42.765        "data_size": 63488
00:15:42.765      },
00:15:42.765      {
00:15:42.765        "name": "BaseBdev3",
00:15:42.765        "uuid": "631b55d5-72c6-5592-b9fd-b6ae6396fece",
00:15:42.765        "is_configured": true,
00:15:42.765        "data_offset": 2048,
00:15:42.765        "data_size": 63488
00:15:42.765      }
00:15:42.765    ]
00:15:42.765  }'
00:15:42.765    11:37:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:42.765   11:37:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:15:42.765    11:37:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:43.026   11:37:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:15:43.026   11:37:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break
00:15:43.026   11:37:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:43.026   11:37:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:43.026   11:37:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:15:43.026   11:37:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:15:43.026   11:37:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:43.026    11:37:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:43.026    11:37:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:43.026    11:37:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:43.026    11:37:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:43.026    11:37:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:43.026   11:37:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:43.026    "name": "raid_bdev1",
00:15:43.026    "uuid": "3ce803cb-7e0a-4498-b42a-2b2acaab4eb9",
00:15:43.026    "strip_size_kb": 64,
00:15:43.026    "state": "online",
00:15:43.026    "raid_level": "raid5f",
00:15:43.026    "superblock": true,
00:15:43.026    "num_base_bdevs": 3,
00:15:43.026    "num_base_bdevs_discovered": 3,
00:15:43.026    "num_base_bdevs_operational": 3,
00:15:43.026    "base_bdevs_list": [
00:15:43.026      {
00:15:43.026        "name": "spare",
00:15:43.026        "uuid": "484fb5eb-e278-5833-bfdb-ce0a647659e5",
00:15:43.026        "is_configured": true,
00:15:43.026        "data_offset": 2048,
00:15:43.026        "data_size": 63488
00:15:43.026      },
00:15:43.026      {
00:15:43.026        "name": "BaseBdev2",
00:15:43.026        "uuid": "09f9cf1e-2704-57ae-8edb-b6b236e87300",
00:15:43.026        "is_configured": true,
00:15:43.026        "data_offset": 2048,
00:15:43.026        "data_size": 63488
00:15:43.026      },
00:15:43.026      {
00:15:43.026        "name": "BaseBdev3",
00:15:43.026        "uuid": "631b55d5-72c6-5592-b9fd-b6ae6396fece",
00:15:43.026        "is_configured": true,
00:15:43.026        "data_offset": 2048,
00:15:43.026        "data_size": 63488
00:15:43.026      }
00:15:43.026    ]
00:15:43.026  }'
00:15:43.026    11:37:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:43.026   11:37:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:43.026    11:37:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:43.026   11:37:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
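(A minimal sketch of the polling pattern traced above, not part of the run; it reuses the rpc.py socket and jq filters shown in this log: poll bdev_raid_get_bdevs until the rebuild process clears, then stop waiting.)
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    timeout=60
    while (( SECONDS < timeout )); do
      info=$($rpc -s /var/tmp/spdk.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
      # rebuild is done once the process block disappears from the raid bdev info
      [[ $(jq -r '.process.type // "none"' <<< "$info") == none ]] && break
      sleep 1
    done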
00:15:43.026   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:15:43.026   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:43.026   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:43.026   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:43.026   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:43.026   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:43.026   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:43.026   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:43.026   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:43.026   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:43.026    11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:43.026    11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:43.026    11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:43.026    11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:43.026    11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:43.026   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:43.026    "name": "raid_bdev1",
00:15:43.026    "uuid": "3ce803cb-7e0a-4498-b42a-2b2acaab4eb9",
00:15:43.026    "strip_size_kb": 64,
00:15:43.026    "state": "online",
00:15:43.026    "raid_level": "raid5f",
00:15:43.026    "superblock": true,
00:15:43.026    "num_base_bdevs": 3,
00:15:43.026    "num_base_bdevs_discovered": 3,
00:15:43.026    "num_base_bdevs_operational": 3,
00:15:43.026    "base_bdevs_list": [
00:15:43.026      {
00:15:43.026        "name": "spare",
00:15:43.026        "uuid": "484fb5eb-e278-5833-bfdb-ce0a647659e5",
00:15:43.027        "is_configured": true,
00:15:43.027        "data_offset": 2048,
00:15:43.027        "data_size": 63488
00:15:43.027      },
00:15:43.027      {
00:15:43.027        "name": "BaseBdev2",
00:15:43.027        "uuid": "09f9cf1e-2704-57ae-8edb-b6b236e87300",
00:15:43.027        "is_configured": true,
00:15:43.027        "data_offset": 2048,
00:15:43.027        "data_size": 63488
00:15:43.027      },
00:15:43.027      {
00:15:43.027        "name": "BaseBdev3",
00:15:43.027        "uuid": "631b55d5-72c6-5592-b9fd-b6ae6396fece",
00:15:43.027        "is_configured": true,
00:15:43.027        "data_offset": 2048,
00:15:43.027        "data_size": 63488
00:15:43.027      }
00:15:43.027    ]
00:15:43.027  }'
00:15:43.027   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:43.027   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:43.597   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:15:43.597   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:43.597   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:43.597  [2024-12-16 11:37:09.464991] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:43.597  [2024-12-16 11:37:09.465074] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:43.597  [2024-12-16 11:37:09.465228] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:43.597  [2024-12-16 11:37:09.465368] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:43.597  [2024-12-16 11:37:09.465435] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:15:43.597   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:43.597    11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:43.597    11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length
00:15:43.597    11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:43.597    11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:43.597    11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:43.597   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:15:43.597   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:15:43.597   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:15:43.597   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:15:43.597   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:15:43.597   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:15:43.597   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:15:43.597   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:15:43.597   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:15:43.597   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:15:43.597   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:15:43.597   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:15:43.597   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:15:43.857  /dev/nbd0
00:15:43.857    11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:15:43.857   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:15:43.857   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:15:43.857   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i
00:15:43.857   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:15:43.857   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:15:43.857   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:15:43.857   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break
00:15:43.857   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:15:43.858   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:15:43.858   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:43.858  1+0 records in
00:15:43.858  1+0 records out
00:15:43.858  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288712 s, 14.2 MB/s
00:15:43.858    11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:43.858   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096
00:15:43.858   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:43.858   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:15:43.858   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0
00:15:43.858   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:43.858   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:15:43.858   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1
00:15:44.118  /dev/nbd1
00:15:44.118    11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:15:44.118   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:15:44.118   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:15:44.118   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i
00:15:44.118   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:15:44.118   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:15:44.118   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:15:44.118   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break
00:15:44.118   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:15:44.118   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:15:44.118   11:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:44.118  1+0 records in
00:15:44.118  1+0 records out
00:15:44.118  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038789 s, 10.6 MB/s
00:15:44.118    11:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:44.118   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096
00:15:44.118   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:44.118   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:15:44.118   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0
00:15:44.118   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:44.118   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:15:44.118   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:15:44.118   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:15:44.118   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:15:44.118   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:15:44.118   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:15:44.118   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:15:44.118   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:44.118   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:15:44.378    11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:15:44.378   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:15:44.378   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:15:44.378   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:44.378   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:44.378   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:15:44.378   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:15:44.378   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:15:44.378   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:44.378   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:15:44.638    11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:15:44.638   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:15:44.638   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:15:44.638   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:44.638   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:44.639   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:15:44.639   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:15:44.639   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
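(The data check above, condensed into a standalone sketch for illustration only; device paths and the 1 MiB offset are taken from the cmp call in this log: export BaseBdev1 and the rebuilt spare over NBD and compare everything past the superblock region.)
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
    $rpc -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1
    cmp -i 1048576 /dev/nbd0 /dev/nbd1        # silent exit 0 means the payloads match past the superblock
    $rpc -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
    $rpc -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1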
00:15:44.639   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:15:44.639   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:15:44.639   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:44.639   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:44.639   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:44.639   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:15:44.639   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:44.639   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:44.639  [2024-12-16 11:37:10.617658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:15:44.639  [2024-12-16 11:37:10.617773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:44.639  [2024-12-16 11:37:10.617814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:15:44.639  [2024-12-16 11:37:10.617850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:44.639  [2024-12-16 11:37:10.620104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:44.639  [2024-12-16 11:37:10.620179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:15:44.639  [2024-12-16 11:37:10.620314] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:15:44.639  [2024-12-16 11:37:10.620392] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:44.639  [2024-12-16 11:37:10.620540] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:44.639  [2024-12-16 11:37:10.620679] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:44.639  spare
00:15:44.639   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:44.639   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:15:44.639   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:44.639   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:44.899  [2024-12-16 11:37:10.720622] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600
00:15:44.899  [2024-12-16 11:37:10.720710] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:15:44.899  [2024-12-16 11:37:10.721037] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047560
00:15:44.899  [2024-12-16 11:37:10.721527] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600
00:15:44.899  [2024-12-16 11:37:10.721593] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600
00:15:44.899  [2024-12-16 11:37:10.721801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:44.899   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
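(A condensed sketch, not part of the run, of the steps traced just above, using only commands that appear in this log: tear down the spare passthru, recreate it on top of spare_delay, and wait for examine, which finds the raid superblock and brings raid_bdev1 back online.)
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk.sock bdev_passthru_delete spare
    $rpc -s /var/tmp/spdk.sock bdev_passthru_create -b spare_delay -p spare
    $rpc -s /var/tmp/spdk.sock bdev_wait_for_examine   # examine claims spare/BaseBdev2/BaseBdev3 and reconfigures raid_bdev1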
00:15:44.899   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:15:44.899   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:44.899   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:44.899   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:44.899   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:44.899   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:44.899   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:44.899   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:44.899   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:44.899   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:44.899    11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:44.899    11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:44.899    11:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:44.899    11:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:44.899    11:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:44.899   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:44.899    "name": "raid_bdev1",
00:15:44.899    "uuid": "3ce803cb-7e0a-4498-b42a-2b2acaab4eb9",
00:15:44.899    "strip_size_kb": 64,
00:15:44.899    "state": "online",
00:15:44.899    "raid_level": "raid5f",
00:15:44.899    "superblock": true,
00:15:44.899    "num_base_bdevs": 3,
00:15:44.899    "num_base_bdevs_discovered": 3,
00:15:44.899    "num_base_bdevs_operational": 3,
00:15:44.899    "base_bdevs_list": [
00:15:44.899      {
00:15:44.899        "name": "spare",
00:15:44.899        "uuid": "484fb5eb-e278-5833-bfdb-ce0a647659e5",
00:15:44.899        "is_configured": true,
00:15:44.899        "data_offset": 2048,
00:15:44.899        "data_size": 63488
00:15:44.899      },
00:15:44.899      {
00:15:44.899        "name": "BaseBdev2",
00:15:44.899        "uuid": "09f9cf1e-2704-57ae-8edb-b6b236e87300",
00:15:44.899        "is_configured": true,
00:15:44.899        "data_offset": 2048,
00:15:44.899        "data_size": 63488
00:15:44.899      },
00:15:44.899      {
00:15:44.899        "name": "BaseBdev3",
00:15:44.899        "uuid": "631b55d5-72c6-5592-b9fd-b6ae6396fece",
00:15:44.899        "is_configured": true,
00:15:44.899        "data_offset": 2048,
00:15:44.899        "data_size": 63488
00:15:44.899      }
00:15:44.899    ]
00:15:44.899  }'
00:15:44.899   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:44.899   11:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:45.159   11:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:45.159   11:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:45.159   11:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:15:45.159   11:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:15:45.159   11:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:45.159    11:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:45.159    11:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:45.159    11:37:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:45.159    11:37:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:45.159    11:37:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:45.419   11:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:45.419    "name": "raid_bdev1",
00:15:45.419    "uuid": "3ce803cb-7e0a-4498-b42a-2b2acaab4eb9",
00:15:45.419    "strip_size_kb": 64,
00:15:45.419    "state": "online",
00:15:45.419    "raid_level": "raid5f",
00:15:45.419    "superblock": true,
00:15:45.419    "num_base_bdevs": 3,
00:15:45.419    "num_base_bdevs_discovered": 3,
00:15:45.419    "num_base_bdevs_operational": 3,
00:15:45.419    "base_bdevs_list": [
00:15:45.419      {
00:15:45.419        "name": "spare",
00:15:45.419        "uuid": "484fb5eb-e278-5833-bfdb-ce0a647659e5",
00:15:45.419        "is_configured": true,
00:15:45.419        "data_offset": 2048,
00:15:45.419        "data_size": 63488
00:15:45.419      },
00:15:45.419      {
00:15:45.419        "name": "BaseBdev2",
00:15:45.419        "uuid": "09f9cf1e-2704-57ae-8edb-b6b236e87300",
00:15:45.419        "is_configured": true,
00:15:45.419        "data_offset": 2048,
00:15:45.419        "data_size": 63488
00:15:45.419      },
00:15:45.419      {
00:15:45.419        "name": "BaseBdev3",
00:15:45.419        "uuid": "631b55d5-72c6-5592-b9fd-b6ae6396fece",
00:15:45.419        "is_configured": true,
00:15:45.419        "data_offset": 2048,
00:15:45.419        "data_size": 63488
00:15:45.419      }
00:15:45.419    ]
00:15:45.419  }'
00:15:45.419    11:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:45.419   11:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:45.419    11:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:45.419   11:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:15:45.419    11:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:15:45.419    11:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:45.419    11:37:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:45.419    11:37:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:45.419    11:37:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:45.419   11:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:15:45.419   11:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:15:45.419   11:37:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:45.419   11:37:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:45.419  [2024-12-16 11:37:11.424656] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:45.419   11:37:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:45.419   11:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:15:45.419   11:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:45.419   11:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:45.419   11:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:45.419   11:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:45.419   11:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:45.419   11:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:45.419   11:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:45.419   11:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:45.419   11:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:45.419    11:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:45.419    11:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:45.419    11:37:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:45.419    11:37:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:45.419    11:37:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:45.419   11:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:45.419    "name": "raid_bdev1",
00:15:45.419    "uuid": "3ce803cb-7e0a-4498-b42a-2b2acaab4eb9",
00:15:45.419    "strip_size_kb": 64,
00:15:45.419    "state": "online",
00:15:45.419    "raid_level": "raid5f",
00:15:45.419    "superblock": true,
00:15:45.419    "num_base_bdevs": 3,
00:15:45.419    "num_base_bdevs_discovered": 2,
00:15:45.419    "num_base_bdevs_operational": 2,
00:15:45.419    "base_bdevs_list": [
00:15:45.419      {
00:15:45.419        "name": null,
00:15:45.419        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:45.419        "is_configured": false,
00:15:45.419        "data_offset": 0,
00:15:45.419        "data_size": 63488
00:15:45.419      },
00:15:45.419      {
00:15:45.419        "name": "BaseBdev2",
00:15:45.419        "uuid": "09f9cf1e-2704-57ae-8edb-b6b236e87300",
00:15:45.419        "is_configured": true,
00:15:45.419        "data_offset": 2048,
00:15:45.419        "data_size": 63488
00:15:45.419      },
00:15:45.419      {
00:15:45.419        "name": "BaseBdev3",
00:15:45.419        "uuid": "631b55d5-72c6-5592-b9fd-b6ae6396fece",
00:15:45.419        "is_configured": true,
00:15:45.419        "data_offset": 2048,
00:15:45.419        "data_size": 63488
00:15:45.419      }
00:15:45.419    ]
00:15:45.419  }'
00:15:45.419   11:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:45.419   11:37:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:45.989   11:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:15:45.989   11:37:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:45.989   11:37:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:45.989  [2024-12-16 11:37:11.903799] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:45.989  [2024-12-16 11:37:11.904054] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:15:45.989  [2024-12-16 11:37:11.904115] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:15:45.989  [2024-12-16 11:37:11.904188] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:45.989  [2024-12-16 11:37:11.907891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047630
00:15:45.989   11:37:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:45.989   11:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1
00:15:45.989  [2024-12-16 11:37:11.910098] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:15:46.940   11:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:46.940   11:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:46.940   11:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:46.940   11:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:46.940   11:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:46.940    11:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:46.940    11:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:46.940    11:37:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:46.940    11:37:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:46.940    11:37:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:46.940   11:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:46.940    "name": "raid_bdev1",
00:15:46.940    "uuid": "3ce803cb-7e0a-4498-b42a-2b2acaab4eb9",
00:15:46.940    "strip_size_kb": 64,
00:15:46.940    "state": "online",
00:15:46.940    "raid_level": "raid5f",
00:15:46.940    "superblock": true,
00:15:46.940    "num_base_bdevs": 3,
00:15:46.940    "num_base_bdevs_discovered": 3,
00:15:46.940    "num_base_bdevs_operational": 3,
00:15:46.941    "process": {
00:15:46.941      "type": "rebuild",
00:15:46.941      "target": "spare",
00:15:46.941      "progress": {
00:15:46.941        "blocks": 20480,
00:15:46.941        "percent": 16
00:15:46.941      }
00:15:46.941    },
00:15:46.941    "base_bdevs_list": [
00:15:46.941      {
00:15:46.941        "name": "spare",
00:15:46.941        "uuid": "484fb5eb-e278-5833-bfdb-ce0a647659e5",
00:15:46.941        "is_configured": true,
00:15:46.941        "data_offset": 2048,
00:15:46.941        "data_size": 63488
00:15:46.941      },
00:15:46.941      {
00:15:46.941        "name": "BaseBdev2",
00:15:46.941        "uuid": "09f9cf1e-2704-57ae-8edb-b6b236e87300",
00:15:46.941        "is_configured": true,
00:15:46.941        "data_offset": 2048,
00:15:46.941        "data_size": 63488
00:15:46.941      },
00:15:46.941      {
00:15:46.941        "name": "BaseBdev3",
00:15:46.941        "uuid": "631b55d5-72c6-5592-b9fd-b6ae6396fece",
00:15:46.941        "is_configured": true,
00:15:46.941        "data_offset": 2048,
00:15:46.941        "data_size": 63488
00:15:46.941      }
00:15:46.941    ]
00:15:46.941  }'
00:15:46.941    11:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:47.214   11:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:47.214    11:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:47.214   11:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:47.214   11:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:15:47.214   11:37:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:47.214   11:37:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:47.214  [2024-12-16 11:37:13.075361] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:47.214  [2024-12-16 11:37:13.119422] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:15:47.214  [2024-12-16 11:37:13.119595] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:47.214  [2024-12-16 11:37:13.119650] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:47.214  [2024-12-16 11:37:13.119682] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:15:47.214   11:37:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:47.214   11:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:15:47.214   11:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:47.214   11:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:47.214   11:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:47.214   11:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:47.214   11:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:47.214   11:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:47.214   11:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:47.214   11:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:47.214   11:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:47.214    11:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:47.214    11:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:47.214    11:37:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:47.214    11:37:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:47.214    11:37:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:47.214   11:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:47.214    "name": "raid_bdev1",
00:15:47.214    "uuid": "3ce803cb-7e0a-4498-b42a-2b2acaab4eb9",
00:15:47.214    "strip_size_kb": 64,
00:15:47.214    "state": "online",
00:15:47.214    "raid_level": "raid5f",
00:15:47.214    "superblock": true,
00:15:47.214    "num_base_bdevs": 3,
00:15:47.214    "num_base_bdevs_discovered": 2,
00:15:47.214    "num_base_bdevs_operational": 2,
00:15:47.214    "base_bdevs_list": [
00:15:47.214      {
00:15:47.214        "name": null,
00:15:47.214        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:47.214        "is_configured": false,
00:15:47.214        "data_offset": 0,
00:15:47.214        "data_size": 63488
00:15:47.214      },
00:15:47.214      {
00:15:47.214        "name": "BaseBdev2",
00:15:47.214        "uuid": "09f9cf1e-2704-57ae-8edb-b6b236e87300",
00:15:47.214        "is_configured": true,
00:15:47.214        "data_offset": 2048,
00:15:47.214        "data_size": 63488
00:15:47.214      },
00:15:47.214      {
00:15:47.214        "name": "BaseBdev3",
00:15:47.214        "uuid": "631b55d5-72c6-5592-b9fd-b6ae6396fece",
00:15:47.214        "is_configured": true,
00:15:47.214        "data_offset": 2048,
00:15:47.214        "data_size": 63488
00:15:47.214      }
00:15:47.214    ]
00:15:47.214  }'
00:15:47.214   11:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:47.214   11:37:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:47.783   11:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:15:47.783   11:37:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:47.783   11:37:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:47.783  [2024-12-16 11:37:13.596062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:15:47.783  [2024-12-16 11:37:13.596185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:47.783  [2024-12-16 11:37:13.596229] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780
00:15:47.783  [2024-12-16 11:37:13.596257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:47.783  [2024-12-16 11:37:13.596751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:47.783  [2024-12-16 11:37:13.596815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:15:47.783  [2024-12-16 11:37:13.596934] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:15:47.783  [2024-12-16 11:37:13.596976] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:15:47.783  [2024-12-16 11:37:13.597019] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:15:47.783  [2024-12-16 11:37:13.597101] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:47.783  [2024-12-16 11:37:13.600786] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700
00:15:47.783  spare
00:15:47.783   11:37:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:47.783   11:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1
00:15:47.783  [2024-12-16 11:37:13.602951] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:15:48.722   11:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:48.722   11:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:48.722   11:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:48.722   11:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:48.722   11:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:48.722    11:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:48.722    11:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:48.722    11:37:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:48.722    11:37:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:48.722    11:37:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:48.722   11:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:48.722    "name": "raid_bdev1",
00:15:48.722    "uuid": "3ce803cb-7e0a-4498-b42a-2b2acaab4eb9",
00:15:48.722    "strip_size_kb": 64,
00:15:48.722    "state": "online",
00:15:48.722    "raid_level": "raid5f",
00:15:48.722    "superblock": true,
00:15:48.722    "num_base_bdevs": 3,
00:15:48.722    "num_base_bdevs_discovered": 3,
00:15:48.722    "num_base_bdevs_operational": 3,
00:15:48.722    "process": {
00:15:48.722      "type": "rebuild",
00:15:48.722      "target": "spare",
00:15:48.722      "progress": {
00:15:48.722        "blocks": 20480,
00:15:48.722        "percent": 16
00:15:48.722      }
00:15:48.722    },
00:15:48.722    "base_bdevs_list": [
00:15:48.722      {
00:15:48.722        "name": "spare",
00:15:48.722        "uuid": "484fb5eb-e278-5833-bfdb-ce0a647659e5",
00:15:48.722        "is_configured": true,
00:15:48.722        "data_offset": 2048,
00:15:48.722        "data_size": 63488
00:15:48.722      },
00:15:48.722      {
00:15:48.722        "name": "BaseBdev2",
00:15:48.722        "uuid": "09f9cf1e-2704-57ae-8edb-b6b236e87300",
00:15:48.722        "is_configured": true,
00:15:48.722        "data_offset": 2048,
00:15:48.722        "data_size": 63488
00:15:48.722      },
00:15:48.722      {
00:15:48.722        "name": "BaseBdev3",
00:15:48.722        "uuid": "631b55d5-72c6-5592-b9fd-b6ae6396fece",
00:15:48.722        "is_configured": true,
00:15:48.722        "data_offset": 2048,
00:15:48.722        "data_size": 63488
00:15:48.722      }
00:15:48.722    ]
00:15:48.722  }'
00:15:48.722    11:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:48.722   11:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:48.722    11:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:48.722   11:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:48.722   11:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:15:48.722   11:37:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:48.722   11:37:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:48.722  [2024-12-16 11:37:14.736039] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:48.981  [2024-12-16 11:37:14.812038] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:15:48.981  [2024-12-16 11:37:14.812181] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:48.981  [2024-12-16 11:37:14.812226] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:48.981  [2024-12-16 11:37:14.812256] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:15:48.981   11:37:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:48.981   11:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:15:48.981   11:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:48.981   11:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:48.981   11:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:48.981   11:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:48.981   11:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:48.981   11:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:48.981   11:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:48.981   11:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:48.981   11:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:48.981    11:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:48.981    11:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:48.981    11:37:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:48.981    11:37:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:48.981    11:37:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:48.981   11:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:48.981    "name": "raid_bdev1",
00:15:48.981    "uuid": "3ce803cb-7e0a-4498-b42a-2b2acaab4eb9",
00:15:48.981    "strip_size_kb": 64,
00:15:48.981    "state": "online",
00:15:48.981    "raid_level": "raid5f",
00:15:48.981    "superblock": true,
00:15:48.981    "num_base_bdevs": 3,
00:15:48.981    "num_base_bdevs_discovered": 2,
00:15:48.981    "num_base_bdevs_operational": 2,
00:15:48.981    "base_bdevs_list": [
00:15:48.981      {
00:15:48.981        "name": null,
00:15:48.981        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:48.981        "is_configured": false,
00:15:48.981        "data_offset": 0,
00:15:48.981        "data_size": 63488
00:15:48.981      },
00:15:48.981      {
00:15:48.981        "name": "BaseBdev2",
00:15:48.981        "uuid": "09f9cf1e-2704-57ae-8edb-b6b236e87300",
00:15:48.981        "is_configured": true,
00:15:48.981        "data_offset": 2048,
00:15:48.981        "data_size": 63488
00:15:48.981      },
00:15:48.982      {
00:15:48.982        "name": "BaseBdev3",
00:15:48.982        "uuid": "631b55d5-72c6-5592-b9fd-b6ae6396fece",
00:15:48.982        "is_configured": true,
00:15:48.982        "data_offset": 2048,
00:15:48.982        "data_size": 63488
00:15:48.982      }
00:15:48.982    ]
00:15:48.982  }'
00:15:48.982   11:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:48.982   11:37:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:49.243   11:37:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:49.243   11:37:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:49.243   11:37:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:15:49.243   11:37:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:15:49.243   11:37:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:49.243    11:37:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:49.243    11:37:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:49.243    11:37:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:49.243    11:37:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:49.243    11:37:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:49.243   11:37:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:49.243    "name": "raid_bdev1",
00:15:49.243    "uuid": "3ce803cb-7e0a-4498-b42a-2b2acaab4eb9",
00:15:49.243    "strip_size_kb": 64,
00:15:49.243    "state": "online",
00:15:49.243    "raid_level": "raid5f",
00:15:49.243    "superblock": true,
00:15:49.243    "num_base_bdevs": 3,
00:15:49.243    "num_base_bdevs_discovered": 2,
00:15:49.243    "num_base_bdevs_operational": 2,
00:15:49.243    "base_bdevs_list": [
00:15:49.243      {
00:15:49.243        "name": null,
00:15:49.243        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:49.243        "is_configured": false,
00:15:49.243        "data_offset": 0,
00:15:49.243        "data_size": 63488
00:15:49.243      },
00:15:49.243      {
00:15:49.243        "name": "BaseBdev2",
00:15:49.243        "uuid": "09f9cf1e-2704-57ae-8edb-b6b236e87300",
00:15:49.243        "is_configured": true,
00:15:49.243        "data_offset": 2048,
00:15:49.243        "data_size": 63488
00:15:49.243      },
00:15:49.243      {
00:15:49.243        "name": "BaseBdev3",
00:15:49.243        "uuid": "631b55d5-72c6-5592-b9fd-b6ae6396fece",
00:15:49.243        "is_configured": true,
00:15:49.243        "data_offset": 2048,
00:15:49.243        "data_size": 63488
00:15:49.243      }
00:15:49.243    ]
00:15:49.243  }'
00:15:49.243    11:37:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:49.503   11:37:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:49.503    11:37:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:49.503   11:37:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:15:49.503   11:37:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:15:49.503   11:37:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:49.503   11:37:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:49.503   11:37:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:49.503   11:37:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:15:49.503   11:37:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:49.503   11:37:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:49.503  [2024-12-16 11:37:15.420296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:15:49.503  [2024-12-16 11:37:15.420398] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:49.503  [2024-12-16 11:37:15.420454] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
00:15:49.503  [2024-12-16 11:37:15.420494] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:49.503  [2024-12-16 11:37:15.420917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:49.503  [2024-12-16 11:37:15.420983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:15:49.503  [2024-12-16 11:37:15.421078] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:15:49.503  [2024-12-16 11:37:15.421122] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:15:49.503  [2024-12-16 11:37:15.421159] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:15:49.503  [2024-12-16 11:37:15.421225] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:15:49.503  BaseBdev1
00:15:49.503   11:37:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:49.503   11:37:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1
00:15:50.441   11:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:15:50.441   11:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:50.441   11:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:50.441   11:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:50.441   11:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:50.441   11:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:50.441   11:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:50.441   11:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:50.441   11:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:50.441   11:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:50.441    11:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:50.441    11:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:50.441    11:37:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:50.441    11:37:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:50.441    11:37:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:50.441   11:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:50.441    "name": "raid_bdev1",
00:15:50.441    "uuid": "3ce803cb-7e0a-4498-b42a-2b2acaab4eb9",
00:15:50.441    "strip_size_kb": 64,
00:15:50.441    "state": "online",
00:15:50.441    "raid_level": "raid5f",
00:15:50.441    "superblock": true,
00:15:50.441    "num_base_bdevs": 3,
00:15:50.441    "num_base_bdevs_discovered": 2,
00:15:50.441    "num_base_bdevs_operational": 2,
00:15:50.441    "base_bdevs_list": [
00:15:50.441      {
00:15:50.441        "name": null,
00:15:50.441        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:50.441        "is_configured": false,
00:15:50.441        "data_offset": 0,
00:15:50.441        "data_size": 63488
00:15:50.441      },
00:15:50.441      {
00:15:50.441        "name": "BaseBdev2",
00:15:50.441        "uuid": "09f9cf1e-2704-57ae-8edb-b6b236e87300",
00:15:50.441        "is_configured": true,
00:15:50.441        "data_offset": 2048,
00:15:50.441        "data_size": 63488
00:15:50.441      },
00:15:50.441      {
00:15:50.441        "name": "BaseBdev3",
00:15:50.441        "uuid": "631b55d5-72c6-5592-b9fd-b6ae6396fece",
00:15:50.441        "is_configured": true,
00:15:50.441        "data_offset": 2048,
00:15:50.441        "data_size": 63488
00:15:50.441      }
00:15:50.441    ]
00:15:50.441  }'
00:15:50.441   11:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:50.441   11:37:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:51.007   11:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:51.008   11:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:51.008   11:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:15:51.008   11:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:15:51.008   11:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:51.008    11:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:51.008    11:37:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.008    11:37:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:51.008    11:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:51.008    11:37:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:51.008   11:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:51.008    "name": "raid_bdev1",
00:15:51.008    "uuid": "3ce803cb-7e0a-4498-b42a-2b2acaab4eb9",
00:15:51.008    "strip_size_kb": 64,
00:15:51.008    "state": "online",
00:15:51.008    "raid_level": "raid5f",
00:15:51.008    "superblock": true,
00:15:51.008    "num_base_bdevs": 3,
00:15:51.008    "num_base_bdevs_discovered": 2,
00:15:51.008    "num_base_bdevs_operational": 2,
00:15:51.008    "base_bdevs_list": [
00:15:51.008      {
00:15:51.008        "name": null,
00:15:51.008        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:51.008        "is_configured": false,
00:15:51.008        "data_offset": 0,
00:15:51.008        "data_size": 63488
00:15:51.008      },
00:15:51.008      {
00:15:51.008        "name": "BaseBdev2",
00:15:51.008        "uuid": "09f9cf1e-2704-57ae-8edb-b6b236e87300",
00:15:51.008        "is_configured": true,
00:15:51.008        "data_offset": 2048,
00:15:51.008        "data_size": 63488
00:15:51.008      },
00:15:51.008      {
00:15:51.008        "name": "BaseBdev3",
00:15:51.008        "uuid": "631b55d5-72c6-5592-b9fd-b6ae6396fece",
00:15:51.008        "is_configured": true,
00:15:51.008        "data_offset": 2048,
00:15:51.008        "data_size": 63488
00:15:51.008      }
00:15:51.008    ]
00:15:51.008  }'
00:15:51.008    11:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:51.008   11:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:51.008    11:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:51.008   11:37:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:15:51.008   11:37:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:15:51.008   11:37:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0
00:15:51.008   11:37:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:15:51.008   11:37:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:15:51.008   11:37:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:51.008    11:37:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:15:51.008   11:37:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:51.008   11:37:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:15:51.008   11:37:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.008   11:37:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:51.008  [2024-12-16 11:37:17.021613] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:51.008  [2024-12-16 11:37:17.021846] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:15:51.008  [2024-12-16 11:37:17.021913] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:15:51.008  request:
00:15:51.008  {
00:15:51.008  "base_bdev": "BaseBdev1",
00:15:51.008  "raid_bdev": "raid_bdev1",
00:15:51.008  "method": "bdev_raid_add_base_bdev",
00:15:51.008  "req_id": 1
00:15:51.008  }
00:15:51.008  Got JSON-RPC error response
00:15:51.008  response:
00:15:51.008  {
00:15:51.008  "code": -22,
00:15:51.008  "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:15:51.008  }
00:15:51.008   11:37:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:15:51.008   11:37:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1
00:15:51.008   11:37:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:15:51.008   11:37:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:15:51.008   11:37:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:15:51.008   11:37:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1
00:15:52.385   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:15:52.385   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:52.385   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:52.385   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:52.385   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:52.385   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:52.385   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:52.385   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:52.385   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:52.385   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:52.385    11:37:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:52.385    11:37:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:52.385    11:37:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:52.385    11:37:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:52.385    11:37:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:52.385   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:52.385    "name": "raid_bdev1",
00:15:52.385    "uuid": "3ce803cb-7e0a-4498-b42a-2b2acaab4eb9",
00:15:52.385    "strip_size_kb": 64,
00:15:52.385    "state": "online",
00:15:52.385    "raid_level": "raid5f",
00:15:52.385    "superblock": true,
00:15:52.385    "num_base_bdevs": 3,
00:15:52.385    "num_base_bdevs_discovered": 2,
00:15:52.385    "num_base_bdevs_operational": 2,
00:15:52.385    "base_bdevs_list": [
00:15:52.385      {
00:15:52.385        "name": null,
00:15:52.385        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:52.385        "is_configured": false,
00:15:52.385        "data_offset": 0,
00:15:52.385        "data_size": 63488
00:15:52.385      },
00:15:52.385      {
00:15:52.385        "name": "BaseBdev2",
00:15:52.385        "uuid": "09f9cf1e-2704-57ae-8edb-b6b236e87300",
00:15:52.385        "is_configured": true,
00:15:52.385        "data_offset": 2048,
00:15:52.385        "data_size": 63488
00:15:52.385      },
00:15:52.385      {
00:15:52.385        "name": "BaseBdev3",
00:15:52.385        "uuid": "631b55d5-72c6-5592-b9fd-b6ae6396fece",
00:15:52.385        "is_configured": true,
00:15:52.385        "data_offset": 2048,
00:15:52.385        "data_size": 63488
00:15:52.385      }
00:15:52.385    ]
00:15:52.385  }'
00:15:52.385   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:52.385   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:52.644   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:52.644   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:52.644   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:15:52.644   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:15:52.644   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:52.644    11:37:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:52.644    11:37:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:52.644    11:37:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:52.644    11:37:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:52.644    11:37:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:52.644   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:52.644    "name": "raid_bdev1",
00:15:52.644    "uuid": "3ce803cb-7e0a-4498-b42a-2b2acaab4eb9",
00:15:52.644    "strip_size_kb": 64,
00:15:52.644    "state": "online",
00:15:52.644    "raid_level": "raid5f",
00:15:52.644    "superblock": true,
00:15:52.644    "num_base_bdevs": 3,
00:15:52.644    "num_base_bdevs_discovered": 2,
00:15:52.644    "num_base_bdevs_operational": 2,
00:15:52.644    "base_bdevs_list": [
00:15:52.644      {
00:15:52.644        "name": null,
00:15:52.644        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:52.644        "is_configured": false,
00:15:52.644        "data_offset": 0,
00:15:52.644        "data_size": 63488
00:15:52.644      },
00:15:52.644      {
00:15:52.644        "name": "BaseBdev2",
00:15:52.644        "uuid": "09f9cf1e-2704-57ae-8edb-b6b236e87300",
00:15:52.644        "is_configured": true,
00:15:52.644        "data_offset": 2048,
00:15:52.644        "data_size": 63488
00:15:52.644      },
00:15:52.644      {
00:15:52.644        "name": "BaseBdev3",
00:15:52.644        "uuid": "631b55d5-72c6-5592-b9fd-b6ae6396fece",
00:15:52.644        "is_configured": true,
00:15:52.644        "data_offset": 2048,
00:15:52.644        "data_size": 63488
00:15:52.644      }
00:15:52.644    ]
00:15:52.644  }'
00:15:52.644    11:37:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:52.644   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:52.644    11:37:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:52.644   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:15:52.644   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 92884
00:15:52.645   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 92884 ']'
00:15:52.645   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 92884
00:15:52.645    11:37:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname
00:15:52.645   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:15:52.645    11:37:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92884
00:15:52.645   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:15:52.645   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:15:52.645   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92884'
00:15:52.645  killing process with pid 92884
00:15:52.645   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 92884
00:15:52.645  Received shutdown signal, test time was about 60.000000 seconds
00:15:52.645  
00:15:52.645                                                                                                  Latency(us)
00:15:52.645  
[2024-12-16T11:37:18.712Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:15:52.645  
[2024-12-16T11:37:18.712Z]  ===================================================================================================================
00:15:52.645  
[2024-12-16T11:37:18.712Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:15:52.645   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 92884
00:15:52.645  [2024-12-16 11:37:18.684011] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:15:52.645  [2024-12-16 11:37:18.684160] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:52.645  [2024-12-16 11:37:18.684230] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:52.645  [2024-12-16 11:37:18.684241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline
00:15:52.904  [2024-12-16 11:37:18.725698] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:15:52.904  ************************************
00:15:52.904  END TEST raid5f_rebuild_test_sb
00:15:52.904  ************************************
00:15:52.904   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0
00:15:52.904  
00:15:52.904  real	0m21.789s
00:15:52.904  user	0m28.463s
00:15:52.904  sys	0m2.710s
00:15:52.904   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
00:15:52.904   11:37:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:53.163   11:37:19 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4}
00:15:53.163   11:37:19 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false
00:15:53.163   11:37:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:15:53.163   11:37:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:15:53.163   11:37:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:15:53.163  ************************************
00:15:53.163  START TEST raid5f_state_function_test
00:15:53.163  ************************************
00:15:53.163   11:37:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false
00:15:53.163   11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f
00:15:53.163   11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:15:53.163   11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:15:53.163   11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:15:53.163    11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:15:53.163    11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:15:53.163    11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:15:53.163    11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:15:53.163    11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:15:53.163    11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:15:53.163    11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:15:53.163    11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:15:53.163    11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:15:53.163    11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:15:53.163    11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:15:53.163    11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:15:53.163    11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:15:53.163    11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:15:53.163   11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:15:53.163   11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:15:53.163   11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:15:53.163   11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:15:53.163   11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:15:53.163   11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:15:53.163   11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']'
00:15:53.163   11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:15:53.163   11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:15:53.163   11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:15:53.163   11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:15:53.163  Process raid pid: 93619
00:15:53.163   11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=93619
00:15:53.163   11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:15:53.163   11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93619'
00:15:53.163   11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 93619
00:15:53.163   11:37:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 93619 ']'
00:15:53.163   11:37:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:53.163   11:37:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:15:53.163   11:37:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:53.163  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:53.163   11:37:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:15:53.163   11:37:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.163  [2024-12-16 11:37:19.115247] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:15:53.163  [2024-12-16 11:37:19.115568] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:53.422  [2024-12-16 11:37:19.288892] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:53.422  [2024-12-16 11:37:19.334639] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:15:53.422  [2024-12-16 11:37:19.377328] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:53.422  [2024-12-16 11:37:19.377374] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:53.989   11:37:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:15:53.989   11:37:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:15:53.989   11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:15:53.989   11:37:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:53.989   11:37:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.989  [2024-12-16 11:37:19.959076] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:53.989  [2024-12-16 11:37:19.959176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:53.989  [2024-12-16 11:37:19.959225] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:53.989  [2024-12-16 11:37:19.959249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:53.989  [2024-12-16 11:37:19.959268] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:53.989  [2024-12-16 11:37:19.959300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:53.989  [2024-12-16 11:37:19.959319] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:15:53.989  [2024-12-16 11:37:19.959340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:15:53.989   11:37:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:53.989   11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:15:53.989   11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:53.989   11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:53.989   11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:53.989   11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:53.989   11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:53.989   11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:53.989   11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:53.989   11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:53.989   11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:53.989    11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:53.989    11:37:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:53.989    11:37:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.989    11:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:53.989    11:37:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:53.989   11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:53.989    "name": "Existed_Raid",
00:15:53.989    "uuid": "00000000-0000-0000-0000-000000000000",
00:15:53.989    "strip_size_kb": 64,
00:15:53.989    "state": "configuring",
00:15:53.989    "raid_level": "raid5f",
00:15:53.989    "superblock": false,
00:15:53.989    "num_base_bdevs": 4,
00:15:53.989    "num_base_bdevs_discovered": 0,
00:15:53.989    "num_base_bdevs_operational": 4,
00:15:53.989    "base_bdevs_list": [
00:15:53.989      {
00:15:53.989        "name": "BaseBdev1",
00:15:53.989        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:53.989        "is_configured": false,
00:15:53.990        "data_offset": 0,
00:15:53.990        "data_size": 0
00:15:53.990      },
00:15:53.990      {
00:15:53.990        "name": "BaseBdev2",
00:15:53.990        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:53.990        "is_configured": false,
00:15:53.990        "data_offset": 0,
00:15:53.990        "data_size": 0
00:15:53.990      },
00:15:53.990      {
00:15:53.990        "name": "BaseBdev3",
00:15:53.990        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:53.990        "is_configured": false,
00:15:53.990        "data_offset": 0,
00:15:53.990        "data_size": 0
00:15:53.990      },
00:15:53.990      {
00:15:53.990        "name": "BaseBdev4",
00:15:53.990        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:53.990        "is_configured": false,
00:15:53.990        "data_offset": 0,
00:15:53.990        "data_size": 0
00:15:53.990      }
00:15:53.990    ]
00:15:53.990  }'
00:15:53.990   11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:53.990   11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:54.557   11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:15:54.557   11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:54.557   11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:54.557  [2024-12-16 11:37:20.422224] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:15:54.557  [2024-12-16 11:37:20.422272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:15:54.557   11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:54.557   11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:54.558  [2024-12-16 11:37:20.434217] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:54.558  [2024-12-16 11:37:20.434305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:54.558  [2024-12-16 11:37:20.434334] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:54.558  [2024-12-16 11:37:20.434357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:54.558  [2024-12-16 11:37:20.434376] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:54.558  [2024-12-16 11:37:20.434397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:54.558  [2024-12-16 11:37:20.434414] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:15:54.558  [2024-12-16 11:37:20.434435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:54.558  [2024-12-16 11:37:20.455052] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:54.558  BaseBdev1
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:54.558  [
00:15:54.558  {
00:15:54.558  "name": "BaseBdev1",
00:15:54.558  "aliases": [
00:15:54.558  "b7a4f099-a047-403c-bc1a-10b02a01b204"
00:15:54.558  ],
00:15:54.558  "product_name": "Malloc disk",
00:15:54.558  "block_size": 512,
00:15:54.558  "num_blocks": 65536,
00:15:54.558  "uuid": "b7a4f099-a047-403c-bc1a-10b02a01b204",
00:15:54.558  "assigned_rate_limits": {
00:15:54.558  "rw_ios_per_sec": 0,
00:15:54.558  "rw_mbytes_per_sec": 0,
00:15:54.558  "r_mbytes_per_sec": 0,
00:15:54.558  "w_mbytes_per_sec": 0
00:15:54.558  },
00:15:54.558  "claimed": true,
00:15:54.558  "claim_type": "exclusive_write",
00:15:54.558  "zoned": false,
00:15:54.558  "supported_io_types": {
00:15:54.558  "read": true,
00:15:54.558  "write": true,
00:15:54.558  "unmap": true,
00:15:54.558  "flush": true,
00:15:54.558  "reset": true,
00:15:54.558  "nvme_admin": false,
00:15:54.558  "nvme_io": false,
00:15:54.558  "nvme_io_md": false,
00:15:54.558  "write_zeroes": true,
00:15:54.558  "zcopy": true,
00:15:54.558  "get_zone_info": false,
00:15:54.558  "zone_management": false,
00:15:54.558  "zone_append": false,
00:15:54.558  "compare": false,
00:15:54.558  "compare_and_write": false,
00:15:54.558  "abort": true,
00:15:54.558  "seek_hole": false,
00:15:54.558  "seek_data": false,
00:15:54.558  "copy": true,
00:15:54.558  "nvme_iov_md": false
00:15:54.558  },
00:15:54.558  "memory_domains": [
00:15:54.558  {
00:15:54.558  "dma_device_id": "system",
00:15:54.558  "dma_device_type": 1
00:15:54.558  },
00:15:54.558  {
00:15:54.558  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:54.558  "dma_device_type": 2
00:15:54.558  }
00:15:54.558  ],
00:15:54.558  "driver_specific": {}
00:15:54.558  }
00:15:54.558  ]
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:54.558    11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:54.558    11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:54.558    11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:54.558    11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:54.558    11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:54.558    "name": "Existed_Raid",
00:15:54.558    "uuid": "00000000-0000-0000-0000-000000000000",
00:15:54.558    "strip_size_kb": 64,
00:15:54.558    "state": "configuring",
00:15:54.558    "raid_level": "raid5f",
00:15:54.558    "superblock": false,
00:15:54.558    "num_base_bdevs": 4,
00:15:54.558    "num_base_bdevs_discovered": 1,
00:15:54.558    "num_base_bdevs_operational": 4,
00:15:54.558    "base_bdevs_list": [
00:15:54.558      {
00:15:54.558        "name": "BaseBdev1",
00:15:54.558        "uuid": "b7a4f099-a047-403c-bc1a-10b02a01b204",
00:15:54.558        "is_configured": true,
00:15:54.558        "data_offset": 0,
00:15:54.558        "data_size": 65536
00:15:54.558      },
00:15:54.558      {
00:15:54.558        "name": "BaseBdev2",
00:15:54.558        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:54.558        "is_configured": false,
00:15:54.558        "data_offset": 0,
00:15:54.558        "data_size": 0
00:15:54.558      },
00:15:54.558      {
00:15:54.558        "name": "BaseBdev3",
00:15:54.558        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:54.558        "is_configured": false,
00:15:54.558        "data_offset": 0,
00:15:54.558        "data_size": 0
00:15:54.558      },
00:15:54.558      {
00:15:54.558        "name": "BaseBdev4",
00:15:54.558        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:54.558        "is_configured": false,
00:15:54.558        "data_offset": 0,
00:15:54.558        "data_size": 0
00:15:54.558      }
00:15:54.558    ]
00:15:54.558  }'
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:54.558   11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:55.126   11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:15:55.126   11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:55.126   11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:55.126  [2024-12-16 11:37:20.938269] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:15:55.126  [2024-12-16 11:37:20.938386] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:15:55.126   11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:55.126   11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:15:55.126   11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:55.126   11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:55.126  [2024-12-16 11:37:20.950290] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:55.126  [2024-12-16 11:37:20.952193] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:55.126  [2024-12-16 11:37:20.952237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:55.126  [2024-12-16 11:37:20.952246] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:55.126  [2024-12-16 11:37:20.952255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:55.126  [2024-12-16 11:37:20.952261] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:15:55.126  [2024-12-16 11:37:20.952270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:15:55.126   11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:55.126   11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:15:55.126   11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:15:55.126   11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:15:55.126   11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:55.126   11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:55.126   11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:55.126   11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:55.126   11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:55.126   11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:55.126   11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:55.126   11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:55.126   11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:55.126    11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:55.126    11:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:55.126    11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:55.126    11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:55.126    11:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:55.126   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:55.126    "name": "Existed_Raid",
00:15:55.126    "uuid": "00000000-0000-0000-0000-000000000000",
00:15:55.126    "strip_size_kb": 64,
00:15:55.126    "state": "configuring",
00:15:55.126    "raid_level": "raid5f",
00:15:55.126    "superblock": false,
00:15:55.126    "num_base_bdevs": 4,
00:15:55.126    "num_base_bdevs_discovered": 1,
00:15:55.126    "num_base_bdevs_operational": 4,
00:15:55.126    "base_bdevs_list": [
00:15:55.126      {
00:15:55.126        "name": "BaseBdev1",
00:15:55.126        "uuid": "b7a4f099-a047-403c-bc1a-10b02a01b204",
00:15:55.126        "is_configured": true,
00:15:55.126        "data_offset": 0,
00:15:55.126        "data_size": 65536
00:15:55.126      },
00:15:55.126      {
00:15:55.126        "name": "BaseBdev2",
00:15:55.126        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:55.126        "is_configured": false,
00:15:55.126        "data_offset": 0,
00:15:55.126        "data_size": 0
00:15:55.126      },
00:15:55.126      {
00:15:55.126        "name": "BaseBdev3",
00:15:55.126        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:55.126        "is_configured": false,
00:15:55.126        "data_offset": 0,
00:15:55.126        "data_size": 0
00:15:55.126      },
00:15:55.126      {
00:15:55.126        "name": "BaseBdev4",
00:15:55.126        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:55.126        "is_configured": false,
00:15:55.126        "data_offset": 0,
00:15:55.126        "data_size": 0
00:15:55.126      }
00:15:55.126    ]
00:15:55.126  }'
00:15:55.126   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:55.126   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:55.386   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:15:55.386   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:55.386   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:55.386  BaseBdev2
00:15:55.386  [2024-12-16 11:37:21.358149] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:55.386   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:55.386   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:15:55.386   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:15:55.386   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:15:55.386   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i
00:15:55.386   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:15:55.386   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:15:55.386   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:15:55.386   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:55.386   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:55.386   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:55.386   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:15:55.386   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:55.386   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:55.386  [
00:15:55.386  {
00:15:55.386  "name": "BaseBdev2",
00:15:55.386  "aliases": [
00:15:55.386  "795e25b3-ef4c-455e-82d8-7006ebb17f96"
00:15:55.386  ],
00:15:55.386  "product_name": "Malloc disk",
00:15:55.386  "block_size": 512,
00:15:55.386  "num_blocks": 65536,
00:15:55.386  "uuid": "795e25b3-ef4c-455e-82d8-7006ebb17f96",
00:15:55.386  "assigned_rate_limits": {
00:15:55.386  "rw_ios_per_sec": 0,
00:15:55.386  "rw_mbytes_per_sec": 0,
00:15:55.386  "r_mbytes_per_sec": 0,
00:15:55.386  "w_mbytes_per_sec": 0
00:15:55.386  },
00:15:55.386  "claimed": true,
00:15:55.386  "claim_type": "exclusive_write",
00:15:55.386  "zoned": false,
00:15:55.386  "supported_io_types": {
00:15:55.386  "read": true,
00:15:55.386  "write": true,
00:15:55.386  "unmap": true,
00:15:55.386  "flush": true,
00:15:55.386  "reset": true,
00:15:55.386  "nvme_admin": false,
00:15:55.386  "nvme_io": false,
00:15:55.386  "nvme_io_md": false,
00:15:55.386  "write_zeroes": true,
00:15:55.386  "zcopy": true,
00:15:55.386  "get_zone_info": false,
00:15:55.386  "zone_management": false,
00:15:55.386  "zone_append": false,
00:15:55.386  "compare": false,
00:15:55.386  "compare_and_write": false,
00:15:55.386  "abort": true,
00:15:55.386  "seek_hole": false,
00:15:55.386  "seek_data": false,
00:15:55.386  "copy": true,
00:15:55.386  "nvme_iov_md": false
00:15:55.386  },
00:15:55.386  "memory_domains": [
00:15:55.386  {
00:15:55.386  "dma_device_id": "system",
00:15:55.386  "dma_device_type": 1
00:15:55.386  },
00:15:55.386  {
00:15:55.386  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:55.386  "dma_device_type": 2
00:15:55.386  }
00:15:55.386  ],
00:15:55.386  "driver_specific": {}
00:15:55.386  }
00:15:55.386  ]
00:15:55.386   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:55.386   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:15:55.386   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:15:55.386   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:15:55.386   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:15:55.386   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:55.386   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:55.386   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:55.386   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:55.386   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:55.386   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:55.386   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:55.386   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:55.386   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:55.386    11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:55.386    11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:55.386    11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:55.386    11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:55.386    11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:55.386   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:55.386    "name": "Existed_Raid",
00:15:55.386    "uuid": "00000000-0000-0000-0000-000000000000",
00:15:55.386    "strip_size_kb": 64,
00:15:55.386    "state": "configuring",
00:15:55.386    "raid_level": "raid5f",
00:15:55.386    "superblock": false,
00:15:55.386    "num_base_bdevs": 4,
00:15:55.386    "num_base_bdevs_discovered": 2,
00:15:55.386    "num_base_bdevs_operational": 4,
00:15:55.386    "base_bdevs_list": [
00:15:55.386      {
00:15:55.386        "name": "BaseBdev1",
00:15:55.387        "uuid": "b7a4f099-a047-403c-bc1a-10b02a01b204",
00:15:55.387        "is_configured": true,
00:15:55.387        "data_offset": 0,
00:15:55.387        "data_size": 65536
00:15:55.387      },
00:15:55.387      {
00:15:55.387        "name": "BaseBdev2",
00:15:55.387        "uuid": "795e25b3-ef4c-455e-82d8-7006ebb17f96",
00:15:55.387        "is_configured": true,
00:15:55.387        "data_offset": 0,
00:15:55.387        "data_size": 65536
00:15:55.387      },
00:15:55.387      {
00:15:55.387        "name": "BaseBdev3",
00:15:55.387        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:55.387        "is_configured": false,
00:15:55.387        "data_offset": 0,
00:15:55.387        "data_size": 0
00:15:55.387      },
00:15:55.387      {
00:15:55.387        "name": "BaseBdev4",
00:15:55.387        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:55.387        "is_configured": false,
00:15:55.387        "data_offset": 0,
00:15:55.387        "data_size": 0
00:15:55.387      }
00:15:55.387    ]
00:15:55.387  }'
00:15:55.387   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:55.387   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:55.954   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:15:55.954   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:55.954   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:55.954  [2024-12-16 11:37:21.816495] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:55.954  BaseBdev3
00:15:55.955   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:55.955   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:15:55.955   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:15:55.955   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:15:55.955   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i
00:15:55.955   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:15:55.955   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:15:55.955   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:15:55.955   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:55.955   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:55.955   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:55.955   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:15:55.955   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:55.955   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:55.955  [
00:15:55.955  {
00:15:55.955  "name": "BaseBdev3",
00:15:55.955  "aliases": [
00:15:55.955  "082f7698-14bb-46a9-9535-12eeda5c2956"
00:15:55.955  ],
00:15:55.955  "product_name": "Malloc disk",
00:15:55.955  "block_size": 512,
00:15:55.955  "num_blocks": 65536,
00:15:55.955  "uuid": "082f7698-14bb-46a9-9535-12eeda5c2956",
00:15:55.955  "assigned_rate_limits": {
00:15:55.955  "rw_ios_per_sec": 0,
00:15:55.955  "rw_mbytes_per_sec": 0,
00:15:55.955  "r_mbytes_per_sec": 0,
00:15:55.955  "w_mbytes_per_sec": 0
00:15:55.955  },
00:15:55.955  "claimed": true,
00:15:55.955  "claim_type": "exclusive_write",
00:15:55.955  "zoned": false,
00:15:55.955  "supported_io_types": {
00:15:55.955  "read": true,
00:15:55.955  "write": true,
00:15:55.955  "unmap": true,
00:15:55.955  "flush": true,
00:15:55.955  "reset": true,
00:15:55.955  "nvme_admin": false,
00:15:55.955  "nvme_io": false,
00:15:55.955  "nvme_io_md": false,
00:15:55.955  "write_zeroes": true,
00:15:55.955  "zcopy": true,
00:15:55.955  "get_zone_info": false,
00:15:55.955  "zone_management": false,
00:15:55.955  "zone_append": false,
00:15:55.955  "compare": false,
00:15:55.955  "compare_and_write": false,
00:15:55.955  "abort": true,
00:15:55.955  "seek_hole": false,
00:15:55.955  "seek_data": false,
00:15:55.955  "copy": true,
00:15:55.955  "nvme_iov_md": false
00:15:55.955  },
00:15:55.955  "memory_domains": [
00:15:55.955  {
00:15:55.955  "dma_device_id": "system",
00:15:55.955  "dma_device_type": 1
00:15:55.955  },
00:15:55.955  {
00:15:55.955  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:55.955  "dma_device_type": 2
00:15:55.955  }
00:15:55.955  ],
00:15:55.955  "driver_specific": {}
00:15:55.955  }
00:15:55.955  ]
00:15:55.955   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:55.955   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:15:55.955   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:15:55.955   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:15:55.955   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:15:55.955   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:55.955   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:55.955   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:55.955   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:55.955   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:55.955   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:55.955   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:55.955   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:55.955   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:55.955    11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:55.955    11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:55.955    11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:55.955    11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:55.955    11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:55.955   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:55.955    "name": "Existed_Raid",
00:15:55.955    "uuid": "00000000-0000-0000-0000-000000000000",
00:15:55.955    "strip_size_kb": 64,
00:15:55.955    "state": "configuring",
00:15:55.955    "raid_level": "raid5f",
00:15:55.955    "superblock": false,
00:15:55.955    "num_base_bdevs": 4,
00:15:55.955    "num_base_bdevs_discovered": 3,
00:15:55.955    "num_base_bdevs_operational": 4,
00:15:55.955    "base_bdevs_list": [
00:15:55.955      {
00:15:55.955        "name": "BaseBdev1",
00:15:55.955        "uuid": "b7a4f099-a047-403c-bc1a-10b02a01b204",
00:15:55.955        "is_configured": true,
00:15:55.955        "data_offset": 0,
00:15:55.955        "data_size": 65536
00:15:55.955      },
00:15:55.955      {
00:15:55.955        "name": "BaseBdev2",
00:15:55.955        "uuid": "795e25b3-ef4c-455e-82d8-7006ebb17f96",
00:15:55.955        "is_configured": true,
00:15:55.955        "data_offset": 0,
00:15:55.955        "data_size": 65536
00:15:55.955      },
00:15:55.955      {
00:15:55.955        "name": "BaseBdev3",
00:15:55.955        "uuid": "082f7698-14bb-46a9-9535-12eeda5c2956",
00:15:55.955        "is_configured": true,
00:15:55.955        "data_offset": 0,
00:15:55.955        "data_size": 65536
00:15:55.955      },
00:15:55.955      {
00:15:55.955        "name": "BaseBdev4",
00:15:55.955        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:55.955        "is_configured": false,
00:15:55.955        "data_offset": 0,
00:15:55.955        "data_size": 0
00:15:55.955      }
00:15:55.955    ]
00:15:55.955  }'
00:15:55.955   11:37:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:55.955   11:37:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:56.522   11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:15:56.522   11:37:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:56.522   11:37:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:56.522  [2024-12-16 11:37:22.294963] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:15:56.522  [2024-12-16 11:37:22.295117] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:15:56.522  [2024-12-16 11:37:22.295147] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:15:56.522  [2024-12-16 11:37:22.295532] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:15:56.522  [2024-12-16 11:37:22.296136] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:15:56.522  [2024-12-16 11:37:22.296211] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:15:56.522  [2024-12-16 11:37:22.296494] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:56.522  BaseBdev4
00:15:56.522   11:37:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:56.522   11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:15:56.522   11:37:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:15:56.522   11:37:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:15:56.523   11:37:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i
00:15:56.523   11:37:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:15:56.523   11:37:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:15:56.523   11:37:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:15:56.523   11:37:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:56.523   11:37:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:56.523   11:37:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:56.523   11:37:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:15:56.523   11:37:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:56.523   11:37:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:56.523  [
00:15:56.523  {
00:15:56.523  "name": "BaseBdev4",
00:15:56.523  "aliases": [
00:15:56.523  "3285697a-af7b-4f08-80fe-42b47bb456dd"
00:15:56.523  ],
00:15:56.523  "product_name": "Malloc disk",
00:15:56.523  "block_size": 512,
00:15:56.523  "num_blocks": 65536,
00:15:56.523  "uuid": "3285697a-af7b-4f08-80fe-42b47bb456dd",
00:15:56.523  "assigned_rate_limits": {
00:15:56.523  "rw_ios_per_sec": 0,
00:15:56.523  "rw_mbytes_per_sec": 0,
00:15:56.523  "r_mbytes_per_sec": 0,
00:15:56.523  "w_mbytes_per_sec": 0
00:15:56.523  },
00:15:56.523  "claimed": true,
00:15:56.523  "claim_type": "exclusive_write",
00:15:56.523  "zoned": false,
00:15:56.523  "supported_io_types": {
00:15:56.523  "read": true,
00:15:56.523  "write": true,
00:15:56.523  "unmap": true,
00:15:56.523  "flush": true,
00:15:56.523  "reset": true,
00:15:56.523  "nvme_admin": false,
00:15:56.523  "nvme_io": false,
00:15:56.523  "nvme_io_md": false,
00:15:56.523  "write_zeroes": true,
00:15:56.523  "zcopy": true,
00:15:56.523  "get_zone_info": false,
00:15:56.523  "zone_management": false,
00:15:56.523  "zone_append": false,
00:15:56.523  "compare": false,
00:15:56.523  "compare_and_write": false,
00:15:56.523  "abort": true,
00:15:56.523  "seek_hole": false,
00:15:56.523  "seek_data": false,
00:15:56.523  "copy": true,
00:15:56.523  "nvme_iov_md": false
00:15:56.523  },
00:15:56.523  "memory_domains": [
00:15:56.523  {
00:15:56.523  "dma_device_id": "system",
00:15:56.523  "dma_device_type": 1
00:15:56.523  },
00:15:56.523  {
00:15:56.523  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:56.523  "dma_device_type": 2
00:15:56.523  }
00:15:56.523  ],
00:15:56.523  "driver_specific": {}
00:15:56.523  }
00:15:56.523  ]
00:15:56.523   11:37:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:56.523   11:37:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:15:56.523   11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:15:56.523   11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:15:56.523   11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4
00:15:56.523   11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:56.523   11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:56.523   11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:56.523   11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:56.523   11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:56.523   11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:56.523   11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:56.523   11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:56.523   11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:56.523    11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:56.523    11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:56.523    11:37:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:56.523    11:37:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:56.523    11:37:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:56.523   11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:56.523    "name": "Existed_Raid",
00:15:56.523    "uuid": "464a8ee8-6b03-4962-98a4-b376c7f9f561",
00:15:56.523    "strip_size_kb": 64,
00:15:56.523    "state": "online",
00:15:56.523    "raid_level": "raid5f",
00:15:56.523    "superblock": false,
00:15:56.523    "num_base_bdevs": 4,
00:15:56.523    "num_base_bdevs_discovered": 4,
00:15:56.523    "num_base_bdevs_operational": 4,
00:15:56.523    "base_bdevs_list": [
00:15:56.523      {
00:15:56.523        "name": "BaseBdev1",
00:15:56.523        "uuid": "b7a4f099-a047-403c-bc1a-10b02a01b204",
00:15:56.523        "is_configured": true,
00:15:56.523        "data_offset": 0,
00:15:56.523        "data_size": 65536
00:15:56.523      },
00:15:56.523      {
00:15:56.523        "name": "BaseBdev2",
00:15:56.523        "uuid": "795e25b3-ef4c-455e-82d8-7006ebb17f96",
00:15:56.523        "is_configured": true,
00:15:56.523        "data_offset": 0,
00:15:56.523        "data_size": 65536
00:15:56.523      },
00:15:56.523      {
00:15:56.523        "name": "BaseBdev3",
00:15:56.523        "uuid": "082f7698-14bb-46a9-9535-12eeda5c2956",
00:15:56.523        "is_configured": true,
00:15:56.523        "data_offset": 0,
00:15:56.523        "data_size": 65536
00:15:56.523      },
00:15:56.523      {
00:15:56.523        "name": "BaseBdev4",
00:15:56.523        "uuid": "3285697a-af7b-4f08-80fe-42b47bb456dd",
00:15:56.523        "is_configured": true,
00:15:56.523        "data_offset": 0,
00:15:56.523        "data_size": 65536
00:15:56.523      }
00:15:56.523    ]
00:15:56.523  }'
00:15:56.523   11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:56.523   11:37:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:56.787   11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:15:56.787   11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:15:56.787   11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:15:56.787   11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:15:56.787   11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:15:56.787   11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:15:56.787    11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:15:56.787    11:37:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:56.787    11:37:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:56.787    11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:15:56.787  [2024-12-16 11:37:22.774471] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:56.787    11:37:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:56.787   11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:15:56.787    "name": "Existed_Raid",
00:15:56.787    "aliases": [
00:15:56.787      "464a8ee8-6b03-4962-98a4-b376c7f9f561"
00:15:56.787    ],
00:15:56.787    "product_name": "Raid Volume",
00:15:56.787    "block_size": 512,
00:15:56.787    "num_blocks": 196608,
00:15:56.787    "uuid": "464a8ee8-6b03-4962-98a4-b376c7f9f561",
00:15:56.787    "assigned_rate_limits": {
00:15:56.787      "rw_ios_per_sec": 0,
00:15:56.787      "rw_mbytes_per_sec": 0,
00:15:56.787      "r_mbytes_per_sec": 0,
00:15:56.787      "w_mbytes_per_sec": 0
00:15:56.787    },
00:15:56.787    "claimed": false,
00:15:56.787    "zoned": false,
00:15:56.787    "supported_io_types": {
00:15:56.787      "read": true,
00:15:56.787      "write": true,
00:15:56.787      "unmap": false,
00:15:56.787      "flush": false,
00:15:56.787      "reset": true,
00:15:56.787      "nvme_admin": false,
00:15:56.787      "nvme_io": false,
00:15:56.787      "nvme_io_md": false,
00:15:56.787      "write_zeroes": true,
00:15:56.787      "zcopy": false,
00:15:56.787      "get_zone_info": false,
00:15:56.787      "zone_management": false,
00:15:56.787      "zone_append": false,
00:15:56.787      "compare": false,
00:15:56.787      "compare_and_write": false,
00:15:56.787      "abort": false,
00:15:56.787      "seek_hole": false,
00:15:56.787      "seek_data": false,
00:15:56.787      "copy": false,
00:15:56.787      "nvme_iov_md": false
00:15:56.787    },
00:15:56.787    "driver_specific": {
00:15:56.787      "raid": {
00:15:56.787        "uuid": "464a8ee8-6b03-4962-98a4-b376c7f9f561",
00:15:56.787        "strip_size_kb": 64,
00:15:56.787        "state": "online",
00:15:56.787        "raid_level": "raid5f",
00:15:56.787        "superblock": false,
00:15:56.787        "num_base_bdevs": 4,
00:15:56.787        "num_base_bdevs_discovered": 4,
00:15:56.787        "num_base_bdevs_operational": 4,
00:15:56.787        "base_bdevs_list": [
00:15:56.787          {
00:15:56.787            "name": "BaseBdev1",
00:15:56.788            "uuid": "b7a4f099-a047-403c-bc1a-10b02a01b204",
00:15:56.788            "is_configured": true,
00:15:56.788            "data_offset": 0,
00:15:56.788            "data_size": 65536
00:15:56.788          },
00:15:56.788          {
00:15:56.788            "name": "BaseBdev2",
00:15:56.788            "uuid": "795e25b3-ef4c-455e-82d8-7006ebb17f96",
00:15:56.788            "is_configured": true,
00:15:56.788            "data_offset": 0,
00:15:56.788            "data_size": 65536
00:15:56.788          },
00:15:56.788          {
00:15:56.788            "name": "BaseBdev3",
00:15:56.788            "uuid": "082f7698-14bb-46a9-9535-12eeda5c2956",
00:15:56.788            "is_configured": true,
00:15:56.788            "data_offset": 0,
00:15:56.788            "data_size": 65536
00:15:56.788          },
00:15:56.788          {
00:15:56.788            "name": "BaseBdev4",
00:15:56.788            "uuid": "3285697a-af7b-4f08-80fe-42b47bb456dd",
00:15:56.788            "is_configured": true,
00:15:56.788            "data_offset": 0,
00:15:56.788            "data_size": 65536
00:15:56.788          }
00:15:56.788        ]
00:15:56.788      }
00:15:56.788    }
00:15:56.788  }'
00:15:56.788    11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:15:56.788   11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:15:56.788  BaseBdev2
00:15:56.788  BaseBdev3
00:15:56.788  BaseBdev4'
00:15:56.788    11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:57.056   11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:15:57.056   11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:57.056    11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:15:57.056    11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:57.056    11:37:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:57.056    11:37:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:57.056    11:37:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:57.056   11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:15:57.056   11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:15:57.056   11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:57.056    11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:57.056    11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:15:57.056    11:37:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:57.056    11:37:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:57.056    11:37:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:57.056   11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:15:57.056   11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:15:57.056   11:37:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:57.056    11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:15:57.056    11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:57.056    11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:57.056    11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:57.056    11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:57.056   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:15:57.056   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:15:57.056   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:57.056    11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:15:57.056    11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:57.056    11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:57.056    11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:57.056    11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:57.056   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:15:57.056   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:15:57.056   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:15:57.056   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:57.056   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:57.056  [2024-12-16 11:37:23.105691] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:15:57.056   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:57.056   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:15:57.056   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f
00:15:57.056   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:15:57.056   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0
00:15:57.056   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:15:57.056   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:15:57.056   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:57.057   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:57.057   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:57.057   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:57.057   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:57.057   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:57.057   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:57.057   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:57.057   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:57.316    11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:57.316    11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:57.316    11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:57.316    11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:57.316    11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:57.316   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:57.316    "name": "Existed_Raid",
00:15:57.316    "uuid": "464a8ee8-6b03-4962-98a4-b376c7f9f561",
00:15:57.316    "strip_size_kb": 64,
00:15:57.316    "state": "online",
00:15:57.316    "raid_level": "raid5f",
00:15:57.316    "superblock": false,
00:15:57.316    "num_base_bdevs": 4,
00:15:57.316    "num_base_bdevs_discovered": 3,
00:15:57.316    "num_base_bdevs_operational": 3,
00:15:57.316    "base_bdevs_list": [
00:15:57.316      {
00:15:57.316        "name": null,
00:15:57.316        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:57.316        "is_configured": false,
00:15:57.316        "data_offset": 0,
00:15:57.316        "data_size": 65536
00:15:57.316      },
00:15:57.316      {
00:15:57.316        "name": "BaseBdev2",
00:15:57.316        "uuid": "795e25b3-ef4c-455e-82d8-7006ebb17f96",
00:15:57.316        "is_configured": true,
00:15:57.316        "data_offset": 0,
00:15:57.316        "data_size": 65536
00:15:57.316      },
00:15:57.316      {
00:15:57.316        "name": "BaseBdev3",
00:15:57.316        "uuid": "082f7698-14bb-46a9-9535-12eeda5c2956",
00:15:57.316        "is_configured": true,
00:15:57.316        "data_offset": 0,
00:15:57.316        "data_size": 65536
00:15:57.316      },
00:15:57.316      {
00:15:57.316        "name": "BaseBdev4",
00:15:57.316        "uuid": "3285697a-af7b-4f08-80fe-42b47bb456dd",
00:15:57.316        "is_configured": true,
00:15:57.316        "data_offset": 0,
00:15:57.316        "data_size": 65536
00:15:57.316      }
00:15:57.316    ]
00:15:57.316  }'
00:15:57.316   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:57.316   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:57.575   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:15:57.575   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:15:57.575    11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:57.575    11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:15:57.575    11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:57.575    11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:57.575    11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:57.835  [2024-12-16 11:37:23.648222] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:15:57.835  [2024-12-16 11:37:23.648396] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:57.835  [2024-12-16 11:37:23.659563] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:15:57.835    11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:57.835    11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:15:57.835    11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:57.835    11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:57.835    11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:57.835  [2024-12-16 11:37:23.719492] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:15:57.835    11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:57.835    11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:57.835    11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:57.835    11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:15:57.835    11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:57.835  [2024-12-16 11:37:23.778888] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:15:57.835  [2024-12-16 11:37:23.778974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:15:57.835    11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:57.835    11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:57.835    11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:57.835    11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:15:57.835    11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:57.835  BaseBdev2
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:57.835   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:57.835  [
00:15:57.835  {
00:15:57.835  "name": "BaseBdev2",
00:15:57.835  "aliases": [
00:15:57.835  "476e64b9-c0e3-48bb-b3f4-9d92d4df3efd"
00:15:57.835  ],
00:15:57.835  "product_name": "Malloc disk",
00:15:57.835  "block_size": 512,
00:15:57.835  "num_blocks": 65536,
00:15:57.835  "uuid": "476e64b9-c0e3-48bb-b3f4-9d92d4df3efd",
00:15:57.835  "assigned_rate_limits": {
00:15:57.835  "rw_ios_per_sec": 0,
00:15:57.835  "rw_mbytes_per_sec": 0,
00:15:57.835  "r_mbytes_per_sec": 0,
00:15:57.835  "w_mbytes_per_sec": 0
00:15:57.835  },
00:15:57.835  "claimed": false,
00:15:57.835  "zoned": false,
00:15:57.836  "supported_io_types": {
00:15:57.836  "read": true,
00:15:57.836  "write": true,
00:15:57.836  "unmap": true,
00:15:57.836  "flush": true,
00:15:57.836  "reset": true,
00:15:57.836  "nvme_admin": false,
00:15:57.836  "nvme_io": false,
00:15:57.836  "nvme_io_md": false,
00:15:57.836  "write_zeroes": true,
00:15:57.836  "zcopy": true,
00:15:57.836  "get_zone_info": false,
00:15:57.836  "zone_management": false,
00:15:57.836  "zone_append": false,
00:15:57.836  "compare": false,
00:15:57.836  "compare_and_write": false,
00:15:57.836  "abort": true,
00:15:58.095  "seek_hole": false,
00:15:58.095  "seek_data": false,
00:15:58.095  "copy": true,
00:15:58.095  "nvme_iov_md": false
00:15:58.095  },
00:15:58.095  "memory_domains": [
00:15:58.095  {
00:15:58.095  "dma_device_id": "system",
00:15:58.095  "dma_device_type": 1
00:15:58.095  },
00:15:58.095  {
00:15:58.095  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:58.095  "dma_device_type": 2
00:15:58.095  }
00:15:58.095  ],
00:15:58.095  "driver_specific": {}
00:15:58.095  }
00:15:58.095  ]
00:15:58.095   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:58.095   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:15:58.095   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:15:58.095   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:15:58.095   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:15:58.095   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:58.095   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:58.095  BaseBdev3
00:15:58.095   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:58.095   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:15:58.095   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:15:58.095   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:15:58.095   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i
00:15:58.095   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:15:58.095   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:15:58.095   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:15:58.095   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:58.095   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:58.095   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:58.095   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:15:58.095   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:58.095   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:58.095  [
00:15:58.095  {
00:15:58.096  "name": "BaseBdev3",
00:15:58.096  "aliases": [
00:15:58.096  "14be11ce-07b2-4ecb-8053-a8cc9affeb97"
00:15:58.096  ],
00:15:58.096  "product_name": "Malloc disk",
00:15:58.096  "block_size": 512,
00:15:58.096  "num_blocks": 65536,
00:15:58.096  "uuid": "14be11ce-07b2-4ecb-8053-a8cc9affeb97",
00:15:58.096  "assigned_rate_limits": {
00:15:58.096  "rw_ios_per_sec": 0,
00:15:58.096  "rw_mbytes_per_sec": 0,
00:15:58.096  "r_mbytes_per_sec": 0,
00:15:58.096  "w_mbytes_per_sec": 0
00:15:58.096  },
00:15:58.096  "claimed": false,
00:15:58.096  "zoned": false,
00:15:58.096  "supported_io_types": {
00:15:58.096  "read": true,
00:15:58.096  "write": true,
00:15:58.096  "unmap": true,
00:15:58.096  "flush": true,
00:15:58.096  "reset": true,
00:15:58.096  "nvme_admin": false,
00:15:58.096  "nvme_io": false,
00:15:58.096  "nvme_io_md": false,
00:15:58.096  "write_zeroes": true,
00:15:58.096  "zcopy": true,
00:15:58.096  "get_zone_info": false,
00:15:58.096  "zone_management": false,
00:15:58.096  "zone_append": false,
00:15:58.096  "compare": false,
00:15:58.096  "compare_and_write": false,
00:15:58.096  "abort": true,
00:15:58.096  "seek_hole": false,
00:15:58.096  "seek_data": false,
00:15:58.096  "copy": true,
00:15:58.096  "nvme_iov_md": false
00:15:58.096  },
00:15:58.096  "memory_domains": [
00:15:58.096  {
00:15:58.096  "dma_device_id": "system",
00:15:58.096  "dma_device_type": 1
00:15:58.096  },
00:15:58.096  {
00:15:58.096  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:58.096  "dma_device_type": 2
00:15:58.096  }
00:15:58.096  ],
00:15:58.096  "driver_specific": {}
00:15:58.096  }
00:15:58.096  ]
00:15:58.096   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:58.096   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:15:58.096   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:15:58.096   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:15:58.096   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:15:58.096   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:58.096   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:58.096  BaseBdev4
00:15:58.096   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:58.096   11:37:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:15:58.096   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:15:58.096   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:15:58.096   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i
00:15:58.096   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:15:58.096   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:15:58.096   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:15:58.096   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:58.096   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:58.096   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:58.096   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:15:58.096   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:58.096   11:37:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:58.096  [
00:15:58.096  {
00:15:58.096  "name": "BaseBdev4",
00:15:58.096  "aliases": [
00:15:58.096  "773cc0fc-3c78-4189-99e2-75fb4c9601bc"
00:15:58.096  ],
00:15:58.096  "product_name": "Malloc disk",
00:15:58.096  "block_size": 512,
00:15:58.096  "num_blocks": 65536,
00:15:58.096  "uuid": "773cc0fc-3c78-4189-99e2-75fb4c9601bc",
00:15:58.096  "assigned_rate_limits": {
00:15:58.096  "rw_ios_per_sec": 0,
00:15:58.096  "rw_mbytes_per_sec": 0,
00:15:58.096  "r_mbytes_per_sec": 0,
00:15:58.096  "w_mbytes_per_sec": 0
00:15:58.096  },
00:15:58.096  "claimed": false,
00:15:58.096  "zoned": false,
00:15:58.096  "supported_io_types": {
00:15:58.096  "read": true,
00:15:58.096  "write": true,
00:15:58.096  "unmap": true,
00:15:58.096  "flush": true,
00:15:58.096  "reset": true,
00:15:58.096  "nvme_admin": false,
00:15:58.096  "nvme_io": false,
00:15:58.096  "nvme_io_md": false,
00:15:58.096  "write_zeroes": true,
00:15:58.096  "zcopy": true,
00:15:58.096  "get_zone_info": false,
00:15:58.096  "zone_management": false,
00:15:58.096  "zone_append": false,
00:15:58.096  "compare": false,
00:15:58.096  "compare_and_write": false,
00:15:58.096  "abort": true,
00:15:58.096  "seek_hole": false,
00:15:58.096  "seek_data": false,
00:15:58.096  "copy": true,
00:15:58.096  "nvme_iov_md": false
00:15:58.096  },
00:15:58.096  "memory_domains": [
00:15:58.096  {
00:15:58.096  "dma_device_id": "system",
00:15:58.096  "dma_device_type": 1
00:15:58.096  },
00:15:58.096  {
00:15:58.096  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:58.096  "dma_device_type": 2
00:15:58.096  }
00:15:58.096  ],
00:15:58.096  "driver_specific": {}
00:15:58.096  }
00:15:58.096  ]
00:15:58.096   11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:58.096   11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:15:58.096   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:15:58.096   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:15:58.096   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:15:58.096   11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:58.096   11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:58.096  [2024-12-16 11:37:24.008642] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:58.096  [2024-12-16 11:37:24.008730] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:58.096  [2024-12-16 11:37:24.008781] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:58.096  [2024-12-16 11:37:24.010842] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:58.096  [2024-12-16 11:37:24.010931] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:15:58.096   11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:58.096   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:15:58.096   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:58.096   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:58.096   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:58.096   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:58.096   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:58.096   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:58.096   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:58.096   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:58.096   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:58.096    11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:58.096    11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:58.096    11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:58.096    11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:58.096    11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:58.096   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:58.096    "name": "Existed_Raid",
00:15:58.096    "uuid": "00000000-0000-0000-0000-000000000000",
00:15:58.096    "strip_size_kb": 64,
00:15:58.096    "state": "configuring",
00:15:58.096    "raid_level": "raid5f",
00:15:58.096    "superblock": false,
00:15:58.096    "num_base_bdevs": 4,
00:15:58.096    "num_base_bdevs_discovered": 3,
00:15:58.096    "num_base_bdevs_operational": 4,
00:15:58.096    "base_bdevs_list": [
00:15:58.096      {
00:15:58.096        "name": "BaseBdev1",
00:15:58.096        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:58.096        "is_configured": false,
00:15:58.096        "data_offset": 0,
00:15:58.096        "data_size": 0
00:15:58.096      },
00:15:58.096      {
00:15:58.096        "name": "BaseBdev2",
00:15:58.096        "uuid": "476e64b9-c0e3-48bb-b3f4-9d92d4df3efd",
00:15:58.096        "is_configured": true,
00:15:58.096        "data_offset": 0,
00:15:58.096        "data_size": 65536
00:15:58.096      },
00:15:58.096      {
00:15:58.096        "name": "BaseBdev3",
00:15:58.096        "uuid": "14be11ce-07b2-4ecb-8053-a8cc9affeb97",
00:15:58.096        "is_configured": true,
00:15:58.096        "data_offset": 0,
00:15:58.096        "data_size": 65536
00:15:58.096      },
00:15:58.096      {
00:15:58.096        "name": "BaseBdev4",
00:15:58.096        "uuid": "773cc0fc-3c78-4189-99e2-75fb4c9601bc",
00:15:58.096        "is_configured": true,
00:15:58.096        "data_offset": 0,
00:15:58.096        "data_size": 65536
00:15:58.096      }
00:15:58.096    ]
00:15:58.096  }'
00:15:58.096   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:58.096   11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
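The step above creates a raid5f array over four named base bdevs while BaseBdev1 is deliberately missing, so the array is registered but stays in the "configuring" state with 3 of 4 members discovered. A minimal standalone sketch of the same sequence, assuming a running SPDK target reachable through the repo's scripts/rpc.py client (paths and base bdev names here are illustrative, not taken from this run's setup):

    # Create three of the four 32 MiB malloc base bdevs (512-byte blocks)
    ./scripts/rpc.py bdev_malloc_create 32 512 -b BaseBdev2
    ./scripts/rpc.py bdev_malloc_create 32 512 -b BaseBdev3
    ./scripts/rpc.py bdev_malloc_create 32 512 -b BaseBdev4
    # Request a raid5f array (64 KiB strip) over all four names; BaseBdev1 does
    # not exist yet, so the array is created but cannot leave "configuring"
    ./scripts/rpc.py bdev_raid_create -z 64 -r raid5f \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # Inspect the array; num_base_bdevs_discovered should read 3 of 4
    ./scripts/rpc.py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'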
00:15:58.665   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:15:58.665   11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:58.665   11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:58.665  [2024-12-16 11:37:24.483832] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:15:58.665   11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:58.665   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:15:58.665   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:58.665   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:58.665   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:58.665   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:58.665   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:58.665   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:58.665   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:58.665   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:58.665   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:58.665    11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:58.665    11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:58.665    11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:58.665    11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:58.665    11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:58.665   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:58.665    "name": "Existed_Raid",
00:15:58.665    "uuid": "00000000-0000-0000-0000-000000000000",
00:15:58.665    "strip_size_kb": 64,
00:15:58.665    "state": "configuring",
00:15:58.665    "raid_level": "raid5f",
00:15:58.665    "superblock": false,
00:15:58.665    "num_base_bdevs": 4,
00:15:58.665    "num_base_bdevs_discovered": 2,
00:15:58.665    "num_base_bdevs_operational": 4,
00:15:58.665    "base_bdevs_list": [
00:15:58.665      {
00:15:58.665        "name": "BaseBdev1",
00:15:58.665        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:58.665        "is_configured": false,
00:15:58.665        "data_offset": 0,
00:15:58.665        "data_size": 0
00:15:58.665      },
00:15:58.665      {
00:15:58.665        "name": null,
00:15:58.665        "uuid": "476e64b9-c0e3-48bb-b3f4-9d92d4df3efd",
00:15:58.665        "is_configured": false,
00:15:58.665        "data_offset": 0,
00:15:58.665        "data_size": 65536
00:15:58.665      },
00:15:58.665      {
00:15:58.665        "name": "BaseBdev3",
00:15:58.665        "uuid": "14be11ce-07b2-4ecb-8053-a8cc9affeb97",
00:15:58.665        "is_configured": true,
00:15:58.665        "data_offset": 0,
00:15:58.665        "data_size": 65536
00:15:58.665      },
00:15:58.665      {
00:15:58.665        "name": "BaseBdev4",
00:15:58.665        "uuid": "773cc0fc-3c78-4189-99e2-75fb4c9601bc",
00:15:58.665        "is_configured": true,
00:15:58.665        "data_offset": 0,
00:15:58.665        "data_size": 65536
00:15:58.665      }
00:15:58.665    ]
00:15:58.665  }'
00:15:58.665   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:58.665   11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
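Detaching BaseBdev2 with bdev_raid_remove_base_bdev does not delete the slot: the entry's name becomes null, is_configured flips to false, and num_base_bdevs_discovered drops to 2 while the array remains "configuring". Roughly the same check by hand, under the same scripts/rpc.py assumption:

    # Detach BaseBdev2 from the still-configuring array; the slot is kept
    # but left unconfigured ("name": null, "is_configured": false)
    ./scripts/rpc.py bdev_raid_remove_base_bdev BaseBdev2
    # Confirm the second slot is no longer configured
    ./scripts/rpc.py bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[1].is_configured'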
00:15:58.926    11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:15:58.926    11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:58.926    11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:58.926    11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:58.926    11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:58.926   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:15:58.926   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:15:58.926   11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:58.926   11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:58.926  [2024-12-16 11:37:24.922427] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:58.926  BaseBdev1
00:15:58.926   11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:58.926   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:15:58.926   11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:15:58.926   11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:15:58.926   11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i
00:15:58.926   11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:15:58.926   11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:15:58.926   11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:15:58.926   11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:58.926   11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:58.926   11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:58.926   11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:15:58.926   11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:58.926   11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:58.926  [
00:15:58.926  {
00:15:58.926  "name": "BaseBdev1",
00:15:58.926  "aliases": [
00:15:58.926  "8caf5b3a-584b-412f-b824-f61da541eaff"
00:15:58.926  ],
00:15:58.926  "product_name": "Malloc disk",
00:15:58.926  "block_size": 512,
00:15:58.926  "num_blocks": 65536,
00:15:58.926  "uuid": "8caf5b3a-584b-412f-b824-f61da541eaff",
00:15:58.926  "assigned_rate_limits": {
00:15:58.926  "rw_ios_per_sec": 0,
00:15:58.926  "rw_mbytes_per_sec": 0,
00:15:58.926  "r_mbytes_per_sec": 0,
00:15:58.926  "w_mbytes_per_sec": 0
00:15:58.926  },
00:15:58.926  "claimed": true,
00:15:58.926  "claim_type": "exclusive_write",
00:15:58.926  "zoned": false,
00:15:58.926  "supported_io_types": {
00:15:58.926  "read": true,
00:15:58.926  "write": true,
00:15:58.926  "unmap": true,
00:15:58.926  "flush": true,
00:15:58.926  "reset": true,
00:15:58.926  "nvme_admin": false,
00:15:58.926  "nvme_io": false,
00:15:58.926  "nvme_io_md": false,
00:15:58.926  "write_zeroes": true,
00:15:58.926  "zcopy": true,
00:15:58.926  "get_zone_info": false,
00:15:58.926  "zone_management": false,
00:15:58.926  "zone_append": false,
00:15:58.926  "compare": false,
00:15:58.926  "compare_and_write": false,
00:15:58.926  "abort": true,
00:15:58.926  "seek_hole": false,
00:15:58.926  "seek_data": false,
00:15:58.926  "copy": true,
00:15:58.926  "nvme_iov_md": false
00:15:58.926  },
00:15:58.926  "memory_domains": [
00:15:58.926  {
00:15:58.926  "dma_device_id": "system",
00:15:58.926  "dma_device_type": 1
00:15:58.926  },
00:15:58.926  {
00:15:58.926  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:58.926  "dma_device_type": 2
00:15:58.926  }
00:15:58.926  ],
00:15:58.926  "driver_specific": {}
00:15:58.926  }
00:15:58.926  ]
00:15:58.926   11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:58.926   11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:15:58.926   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:15:58.926   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:58.926   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:58.926   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:58.926   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:58.926   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:58.926   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:58.926   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:58.926   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:58.926   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:58.926    11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:58.926    11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:58.926    11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:58.926    11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:58.926    11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:58.926   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:58.926    "name": "Existed_Raid",
00:15:58.926    "uuid": "00000000-0000-0000-0000-000000000000",
00:15:58.926    "strip_size_kb": 64,
00:15:58.926    "state": "configuring",
00:15:58.926    "raid_level": "raid5f",
00:15:58.926    "superblock": false,
00:15:58.926    "num_base_bdevs": 4,
00:15:58.926    "num_base_bdevs_discovered": 3,
00:15:58.926    "num_base_bdevs_operational": 4,
00:15:58.926    "base_bdevs_list": [
00:15:58.926      {
00:15:58.927        "name": "BaseBdev1",
00:15:58.927        "uuid": "8caf5b3a-584b-412f-b824-f61da541eaff",
00:15:58.927        "is_configured": true,
00:15:58.927        "data_offset": 0,
00:15:58.927        "data_size": 65536
00:15:58.927      },
00:15:58.927      {
00:15:58.927        "name": null,
00:15:58.927        "uuid": "476e64b9-c0e3-48bb-b3f4-9d92d4df3efd",
00:15:58.927        "is_configured": false,
00:15:58.927        "data_offset": 0,
00:15:58.927        "data_size": 65536
00:15:58.927      },
00:15:58.927      {
00:15:58.927        "name": "BaseBdev3",
00:15:58.927        "uuid": "14be11ce-07b2-4ecb-8053-a8cc9affeb97",
00:15:58.927        "is_configured": true,
00:15:58.927        "data_offset": 0,
00:15:58.927        "data_size": 65536
00:15:58.927      },
00:15:58.927      {
00:15:58.927        "name": "BaseBdev4",
00:15:58.927        "uuid": "773cc0fc-3c78-4189-99e2-75fb4c9601bc",
00:15:58.927        "is_configured": true,
00:15:58.927        "data_offset": 0,
00:15:58.927        "data_size": 65536
00:15:58.927      }
00:15:58.927    ]
00:15:58.927  }'
00:15:58.927   11:37:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:58.927   11:37:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
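Creating the missing BaseBdev1 malloc bdev is enough for the raid module to claim it during examine; the slot named BaseBdev1 becomes configured again and the discovered count returns to 3, while the array stays "configuring" because the detached BaseBdev2 slot is still empty. A hedged sketch of that step:

    # Create the missing 32 MiB / 512 B malloc bdev; the raid module claims it
    # on examine, so no explicit attach call is needed for a still-named slot
    ./scripts/rpc.py bdev_malloc_create 32 512 -b BaseBdev1
    ./scripts/rpc.py bdev_wait_for_examine
    # BaseBdev1 should now report "claimed": true with claim_type "exclusive_write"
    ./scripts/rpc.py bdev_get_bdevs -b BaseBdev1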
00:15:59.495    11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:59.495    11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:15:59.495    11:37:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:59.495    11:37:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:59.495    11:37:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:59.495   11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:15:59.495   11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:15:59.495   11:37:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:59.495   11:37:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:59.495  [2024-12-16 11:37:25.473575] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:15:59.495   11:37:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:59.495   11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:15:59.495   11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:59.495   11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:59.495   11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:59.495   11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:59.495   11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:59.495   11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:59.495   11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:59.495   11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:59.495   11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:59.495    11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:59.495    11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:59.495    11:37:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:59.495    11:37:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:59.495    11:37:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:59.495   11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:59.495    "name": "Existed_Raid",
00:15:59.495    "uuid": "00000000-0000-0000-0000-000000000000",
00:15:59.495    "strip_size_kb": 64,
00:15:59.495    "state": "configuring",
00:15:59.495    "raid_level": "raid5f",
00:15:59.495    "superblock": false,
00:15:59.495    "num_base_bdevs": 4,
00:15:59.495    "num_base_bdevs_discovered": 2,
00:15:59.495    "num_base_bdevs_operational": 4,
00:15:59.495    "base_bdevs_list": [
00:15:59.495      {
00:15:59.495        "name": "BaseBdev1",
00:15:59.495        "uuid": "8caf5b3a-584b-412f-b824-f61da541eaff",
00:15:59.495        "is_configured": true,
00:15:59.495        "data_offset": 0,
00:15:59.495        "data_size": 65536
00:15:59.495      },
00:15:59.495      {
00:15:59.495        "name": null,
00:15:59.495        "uuid": "476e64b9-c0e3-48bb-b3f4-9d92d4df3efd",
00:15:59.495        "is_configured": false,
00:15:59.495        "data_offset": 0,
00:15:59.495        "data_size": 65536
00:15:59.495      },
00:15:59.495      {
00:15:59.495        "name": null,
00:15:59.495        "uuid": "14be11ce-07b2-4ecb-8053-a8cc9affeb97",
00:15:59.495        "is_configured": false,
00:15:59.495        "data_offset": 0,
00:15:59.495        "data_size": 65536
00:15:59.495      },
00:15:59.495      {
00:15:59.495        "name": "BaseBdev4",
00:15:59.495        "uuid": "773cc0fc-3c78-4189-99e2-75fb4c9601bc",
00:15:59.495        "is_configured": true,
00:15:59.495        "data_offset": 0,
00:15:59.495        "data_size": 65536
00:15:59.495      }
00:15:59.495    ]
00:15:59.495  }'
00:15:59.495   11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:59.495   11:37:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:00.064    11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:16:00.064    11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:00.064    11:37:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:00.064    11:37:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:00.064    11:37:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:00.064   11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:16:00.064   11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:16:00.064   11:37:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:00.064   11:37:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:00.064  [2024-12-16 11:37:25.976756] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:00.064   11:37:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:00.064   11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:16:00.064   11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:00.064   11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:00.064   11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:00.064   11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:00.064   11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:00.064   11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:00.064   11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:00.064   11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:00.064   11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:00.064    11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:00.064    11:37:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:00.064    11:37:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:00.064    11:37:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:00.064    11:37:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:00.064   11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:00.064    "name": "Existed_Raid",
00:16:00.064    "uuid": "00000000-0000-0000-0000-000000000000",
00:16:00.064    "strip_size_kb": 64,
00:16:00.064    "state": "configuring",
00:16:00.064    "raid_level": "raid5f",
00:16:00.064    "superblock": false,
00:16:00.064    "num_base_bdevs": 4,
00:16:00.064    "num_base_bdevs_discovered": 3,
00:16:00.064    "num_base_bdevs_operational": 4,
00:16:00.064    "base_bdevs_list": [
00:16:00.064      {
00:16:00.064        "name": "BaseBdev1",
00:16:00.064        "uuid": "8caf5b3a-584b-412f-b824-f61da541eaff",
00:16:00.064        "is_configured": true,
00:16:00.064        "data_offset": 0,
00:16:00.064        "data_size": 65536
00:16:00.064      },
00:16:00.064      {
00:16:00.064        "name": null,
00:16:00.064        "uuid": "476e64b9-c0e3-48bb-b3f4-9d92d4df3efd",
00:16:00.064        "is_configured": false,
00:16:00.064        "data_offset": 0,
00:16:00.064        "data_size": 65536
00:16:00.064      },
00:16:00.064      {
00:16:00.064        "name": "BaseBdev3",
00:16:00.064        "uuid": "14be11ce-07b2-4ecb-8053-a8cc9affeb97",
00:16:00.064        "is_configured": true,
00:16:00.064        "data_offset": 0,
00:16:00.064        "data_size": 65536
00:16:00.064      },
00:16:00.064      {
00:16:00.064        "name": "BaseBdev4",
00:16:00.064        "uuid": "773cc0fc-3c78-4189-99e2-75fb4c9601bc",
00:16:00.064        "is_configured": true,
00:16:00.064        "data_offset": 0,
00:16:00.064        "data_size": 65536
00:16:00.064      }
00:16:00.064    ]
00:16:00.064  }'
00:16:00.064   11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:00.064   11:37:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
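BaseBdev3, which still exists as a bdev but was detached from the array in the previous step, is re-attached explicitly with bdev_raid_add_base_bdev. The equivalent manual step, same assumptions:

    # Re-attach an existing, unclaimed bdev to an empty slot of the array
    ./scripts/rpc.py bdev_raid_add_base_bdev Existed_Raid BaseBdev3
    # The third slot should report is_configured == true again
    ./scripts/rpc.py bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'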
00:16:00.634    11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:00.634    11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:16:00.634    11:37:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:00.634    11:37:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:00.634    11:37:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:00.634   11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:16:00.634   11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:16:00.634   11:37:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:00.634   11:37:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:00.634  [2024-12-16 11:37:26.475975] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:16:00.634   11:37:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:00.634   11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:16:00.634   11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:00.634   11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:00.634   11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:00.634   11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:00.634   11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:00.634   11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:00.634   11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:00.634   11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:00.634   11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:00.634    11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:00.634    11:37:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:00.634    11:37:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:00.634    11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:00.634    11:37:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:00.634   11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:00.634    "name": "Existed_Raid",
00:16:00.634    "uuid": "00000000-0000-0000-0000-000000000000",
00:16:00.634    "strip_size_kb": 64,
00:16:00.634    "state": "configuring",
00:16:00.634    "raid_level": "raid5f",
00:16:00.634    "superblock": false,
00:16:00.634    "num_base_bdevs": 4,
00:16:00.634    "num_base_bdevs_discovered": 2,
00:16:00.634    "num_base_bdevs_operational": 4,
00:16:00.634    "base_bdevs_list": [
00:16:00.634      {
00:16:00.634        "name": null,
00:16:00.634        "uuid": "8caf5b3a-584b-412f-b824-f61da541eaff",
00:16:00.634        "is_configured": false,
00:16:00.634        "data_offset": 0,
00:16:00.634        "data_size": 65536
00:16:00.634      },
00:16:00.634      {
00:16:00.634        "name": null,
00:16:00.634        "uuid": "476e64b9-c0e3-48bb-b3f4-9d92d4df3efd",
00:16:00.634        "is_configured": false,
00:16:00.634        "data_offset": 0,
00:16:00.634        "data_size": 65536
00:16:00.634      },
00:16:00.634      {
00:16:00.634        "name": "BaseBdev3",
00:16:00.634        "uuid": "14be11ce-07b2-4ecb-8053-a8cc9affeb97",
00:16:00.634        "is_configured": true,
00:16:00.634        "data_offset": 0,
00:16:00.634        "data_size": 65536
00:16:00.634      },
00:16:00.634      {
00:16:00.634        "name": "BaseBdev4",
00:16:00.634        "uuid": "773cc0fc-3c78-4189-99e2-75fb4c9601bc",
00:16:00.634        "is_configured": true,
00:16:00.634        "data_offset": 0,
00:16:00.634        "data_size": 65536
00:16:00.634      }
00:16:00.634    ]
00:16:00.634  }'
00:16:00.634   11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:00.634   11:37:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
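Deleting the member's backing bdev outright with bdev_malloc_delete, as opposed to detaching it from the array, also unconfigures its slot: the entry loses its name but keeps the UUID the array remembers for it, which the test reads back a few steps later. By hand:

    # Delete the member's backing malloc bdev; the raid slot keeps its UUID
    # but is left unnamed and unconfigured
    ./scripts/rpc.py bdev_malloc_delete BaseBdev1
    ./scripts/rpc.py bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[0]'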
00:16:00.894    11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:16:00.894    11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:00.894    11:37:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:00.894    11:37:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:00.894    11:37:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:00.894   11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:16:00.894   11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:16:00.894   11:37:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:00.894   11:37:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:00.894  [2024-12-16 11:37:26.949989] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:16:00.894   11:37:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:00.894   11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:16:00.894   11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:00.894   11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:00.894   11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:00.894   11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:00.894   11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:00.894   11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:00.894   11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:00.894   11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:00.894   11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:01.154    11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:01.154    11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:01.154    11:37:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:01.154    11:37:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:01.154    11:37:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:01.154   11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:01.154    "name": "Existed_Raid",
00:16:01.154    "uuid": "00000000-0000-0000-0000-000000000000",
00:16:01.154    "strip_size_kb": 64,
00:16:01.154    "state": "configuring",
00:16:01.154    "raid_level": "raid5f",
00:16:01.154    "superblock": false,
00:16:01.154    "num_base_bdevs": 4,
00:16:01.154    "num_base_bdevs_discovered": 3,
00:16:01.154    "num_base_bdevs_operational": 4,
00:16:01.154    "base_bdevs_list": [
00:16:01.154      {
00:16:01.154        "name": null,
00:16:01.154        "uuid": "8caf5b3a-584b-412f-b824-f61da541eaff",
00:16:01.154        "is_configured": false,
00:16:01.154        "data_offset": 0,
00:16:01.154        "data_size": 65536
00:16:01.154      },
00:16:01.154      {
00:16:01.154        "name": "BaseBdev2",
00:16:01.154        "uuid": "476e64b9-c0e3-48bb-b3f4-9d92d4df3efd",
00:16:01.154        "is_configured": true,
00:16:01.154        "data_offset": 0,
00:16:01.154        "data_size": 65536
00:16:01.154      },
00:16:01.154      {
00:16:01.154        "name": "BaseBdev3",
00:16:01.154        "uuid": "14be11ce-07b2-4ecb-8053-a8cc9affeb97",
00:16:01.154        "is_configured": true,
00:16:01.154        "data_offset": 0,
00:16:01.155        "data_size": 65536
00:16:01.155      },
00:16:01.155      {
00:16:01.155        "name": "BaseBdev4",
00:16:01.155        "uuid": "773cc0fc-3c78-4189-99e2-75fb4c9601bc",
00:16:01.155        "is_configured": true,
00:16:01.155        "data_offset": 0,
00:16:01.155        "data_size": 65536
00:16:01.155      }
00:16:01.155    ]
00:16:01.155  }'
00:16:01.155   11:37:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:01.155   11:37:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:01.415    11:37:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:01.415    11:37:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:16:01.415    11:37:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:01.415    11:37:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:01.415    11:37:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:01.415   11:37:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:16:01.415    11:37:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:01.415    11:37:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:16:01.415    11:37:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:01.415    11:37:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:01.674    11:37:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:01.674   11:37:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8caf5b3a-584b-412f-b824-f61da541eaff
00:16:01.674   11:37:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:01.674   11:37:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:01.674  [2024-12-16 11:37:27.528231] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:16:01.675  [2024-12-16 11:37:27.528351] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:16:01.675  [2024-12-16 11:37:27.528380] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:16:01.675  [2024-12-16 11:37:27.528694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:16:01.675  [2024-12-16 11:37:27.529232] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:16:01.675  [2024-12-16 11:37:27.529293] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00
00:16:01.675  [2024-12-16 11:37:27.529529] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:01.675  NewBaseBdev
00:16:01.675   11:37:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:01.675   11:37:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:16:01.675   11:37:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev
00:16:01.675   11:37:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:16:01.675   11:37:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i
00:16:01.675   11:37:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:16:01.675   11:37:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:16:01.675   11:37:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:16:01.675   11:37:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:01.675   11:37:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:01.675   11:37:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:01.675   11:37:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:16:01.675   11:37:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:01.675   11:37:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:01.675  [
00:16:01.675  {
00:16:01.675  "name": "NewBaseBdev",
00:16:01.675  "aliases": [
00:16:01.675  "8caf5b3a-584b-412f-b824-f61da541eaff"
00:16:01.675  ],
00:16:01.675  "product_name": "Malloc disk",
00:16:01.675  "block_size": 512,
00:16:01.675  "num_blocks": 65536,
00:16:01.675  "uuid": "8caf5b3a-584b-412f-b824-f61da541eaff",
00:16:01.675  "assigned_rate_limits": {
00:16:01.675  "rw_ios_per_sec": 0,
00:16:01.675  "rw_mbytes_per_sec": 0,
00:16:01.675  "r_mbytes_per_sec": 0,
00:16:01.675  "w_mbytes_per_sec": 0
00:16:01.675  },
00:16:01.675  "claimed": true,
00:16:01.675  "claim_type": "exclusive_write",
00:16:01.675  "zoned": false,
00:16:01.675  "supported_io_types": {
00:16:01.675  "read": true,
00:16:01.675  "write": true,
00:16:01.675  "unmap": true,
00:16:01.675  "flush": true,
00:16:01.675  "reset": true,
00:16:01.675  "nvme_admin": false,
00:16:01.675  "nvme_io": false,
00:16:01.675  "nvme_io_md": false,
00:16:01.675  "write_zeroes": true,
00:16:01.675  "zcopy": true,
00:16:01.675  "get_zone_info": false,
00:16:01.675  "zone_management": false,
00:16:01.675  "zone_append": false,
00:16:01.675  "compare": false,
00:16:01.675  "compare_and_write": false,
00:16:01.675  "abort": true,
00:16:01.675  "seek_hole": false,
00:16:01.675  "seek_data": false,
00:16:01.675  "copy": true,
00:16:01.675  "nvme_iov_md": false
00:16:01.675  },
00:16:01.675  "memory_domains": [
00:16:01.675  {
00:16:01.675  "dma_device_id": "system",
00:16:01.675  "dma_device_type": 1
00:16:01.675  },
00:16:01.675  {
00:16:01.675  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:01.675  "dma_device_type": 2
00:16:01.675  }
00:16:01.675  ],
00:16:01.675  "driver_specific": {}
00:16:01.675  }
00:16:01.675  ]
00:16:01.675   11:37:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:01.675   11:37:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:16:01.675   11:37:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4
00:16:01.675   11:37:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:01.675   11:37:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:01.675   11:37:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:01.675   11:37:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:01.675   11:37:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:01.675   11:37:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:01.675   11:37:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:01.675   11:37:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:01.675   11:37:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:01.675    11:37:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:01.675    11:37:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:01.675    11:37:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:01.675    11:37:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:01.675    11:37:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:01.675   11:37:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:01.675    "name": "Existed_Raid",
00:16:01.675    "uuid": "11112d61-0247-4660-b835-f6c7d0386e79",
00:16:01.675    "strip_size_kb": 64,
00:16:01.675    "state": "online",
00:16:01.675    "raid_level": "raid5f",
00:16:01.675    "superblock": false,
00:16:01.675    "num_base_bdevs": 4,
00:16:01.675    "num_base_bdevs_discovered": 4,
00:16:01.675    "num_base_bdevs_operational": 4,
00:16:01.675    "base_bdevs_list": [
00:16:01.675      {
00:16:01.675        "name": "NewBaseBdev",
00:16:01.675        "uuid": "8caf5b3a-584b-412f-b824-f61da541eaff",
00:16:01.675        "is_configured": true,
00:16:01.675        "data_offset": 0,
00:16:01.675        "data_size": 65536
00:16:01.675      },
00:16:01.675      {
00:16:01.675        "name": "BaseBdev2",
00:16:01.675        "uuid": "476e64b9-c0e3-48bb-b3f4-9d92d4df3efd",
00:16:01.675        "is_configured": true,
00:16:01.675        "data_offset": 0,
00:16:01.675        "data_size": 65536
00:16:01.675      },
00:16:01.675      {
00:16:01.675        "name": "BaseBdev3",
00:16:01.675        "uuid": "14be11ce-07b2-4ecb-8053-a8cc9affeb97",
00:16:01.675        "is_configured": true,
00:16:01.675        "data_offset": 0,
00:16:01.675        "data_size": 65536
00:16:01.675      },
00:16:01.675      {
00:16:01.675        "name": "BaseBdev4",
00:16:01.675        "uuid": "773cc0fc-3c78-4189-99e2-75fb4c9601bc",
00:16:01.675        "is_configured": true,
00:16:01.675        "data_offset": 0,
00:16:01.675        "data_size": 65536
00:16:01.675      }
00:16:01.675    ]
00:16:01.675  }'
00:16:01.675   11:37:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:01.675   11:37:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
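The final empty slot is filled by recreating a bdev under a new name (NewBaseBdev) but with the exact UUID the array remembers for slot 0, read back above with jq '.[0].base_bdevs_list[0].uuid'. Once that bdev is claimed, all four members are discovered and the array transitions from "configuring" to "online". A sketch under the same assumptions:

    # Recreate the last missing member under a new name but with the UUID the
    # array still holds for slot 0; the array then comes online
    uuid=$(./scripts/rpc.py bdev_raid_get_bdevs all | jq -r '.[0].base_bdevs_list[0].uuid')
    ./scripts/rpc.py bdev_malloc_create 32 512 -b NewBaseBdev -u "$uuid"
    ./scripts/rpc.py bdev_wait_for_examine
    # "state" should now read "online" with 4 of 4 base bdevs discovered
    ./scripts/rpc.py bdev_raid_get_bdevs all | jq -r '.[0].state'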
00:16:02.245   11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:16:02.245   11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:16:02.245   11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:16:02.245   11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:16:02.245   11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:16:02.245   11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:16:02.245    11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:16:02.245    11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:16:02.245    11:37:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:02.245    11:37:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:02.245  [2024-12-16 11:37:28.031932] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:02.245    11:37:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:02.245   11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:16:02.245    "name": "Existed_Raid",
00:16:02.245    "aliases": [
00:16:02.245      "11112d61-0247-4660-b835-f6c7d0386e79"
00:16:02.245    ],
00:16:02.245    "product_name": "Raid Volume",
00:16:02.245    "block_size": 512,
00:16:02.245    "num_blocks": 196608,
00:16:02.245    "uuid": "11112d61-0247-4660-b835-f6c7d0386e79",
00:16:02.245    "assigned_rate_limits": {
00:16:02.245      "rw_ios_per_sec": 0,
00:16:02.245      "rw_mbytes_per_sec": 0,
00:16:02.245      "r_mbytes_per_sec": 0,
00:16:02.245      "w_mbytes_per_sec": 0
00:16:02.245    },
00:16:02.245    "claimed": false,
00:16:02.245    "zoned": false,
00:16:02.245    "supported_io_types": {
00:16:02.245      "read": true,
00:16:02.245      "write": true,
00:16:02.245      "unmap": false,
00:16:02.245      "flush": false,
00:16:02.245      "reset": true,
00:16:02.245      "nvme_admin": false,
00:16:02.245      "nvme_io": false,
00:16:02.245      "nvme_io_md": false,
00:16:02.245      "write_zeroes": true,
00:16:02.245      "zcopy": false,
00:16:02.245      "get_zone_info": false,
00:16:02.245      "zone_management": false,
00:16:02.245      "zone_append": false,
00:16:02.245      "compare": false,
00:16:02.245      "compare_and_write": false,
00:16:02.245      "abort": false,
00:16:02.245      "seek_hole": false,
00:16:02.245      "seek_data": false,
00:16:02.245      "copy": false,
00:16:02.245      "nvme_iov_md": false
00:16:02.245    },
00:16:02.245    "driver_specific": {
00:16:02.245      "raid": {
00:16:02.245        "uuid": "11112d61-0247-4660-b835-f6c7d0386e79",
00:16:02.245        "strip_size_kb": 64,
00:16:02.245        "state": "online",
00:16:02.245        "raid_level": "raid5f",
00:16:02.245        "superblock": false,
00:16:02.245        "num_base_bdevs": 4,
00:16:02.245        "num_base_bdevs_discovered": 4,
00:16:02.245        "num_base_bdevs_operational": 4,
00:16:02.245        "base_bdevs_list": [
00:16:02.245          {
00:16:02.245            "name": "NewBaseBdev",
00:16:02.245            "uuid": "8caf5b3a-584b-412f-b824-f61da541eaff",
00:16:02.245            "is_configured": true,
00:16:02.245            "data_offset": 0,
00:16:02.245            "data_size": 65536
00:16:02.245          },
00:16:02.245          {
00:16:02.245            "name": "BaseBdev2",
00:16:02.245            "uuid": "476e64b9-c0e3-48bb-b3f4-9d92d4df3efd",
00:16:02.245            "is_configured": true,
00:16:02.245            "data_offset": 0,
00:16:02.245            "data_size": 65536
00:16:02.245          },
00:16:02.245          {
00:16:02.245            "name": "BaseBdev3",
00:16:02.245            "uuid": "14be11ce-07b2-4ecb-8053-a8cc9affeb97",
00:16:02.245            "is_configured": true,
00:16:02.245            "data_offset": 0,
00:16:02.245            "data_size": 65536
00:16:02.245          },
00:16:02.245          {
00:16:02.245            "name": "BaseBdev4",
00:16:02.245            "uuid": "773cc0fc-3c78-4189-99e2-75fb4c9601bc",
00:16:02.245            "is_configured": true,
00:16:02.245            "data_offset": 0,
00:16:02.245            "data_size": 65536
00:16:02.245          }
00:16:02.245        ]
00:16:02.245      }
00:16:02.246    }
00:16:02.246  }'
00:16:02.246    11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:16:02.246   11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:16:02.246  BaseBdev2
00:16:02.246  BaseBdev3
00:16:02.246  BaseBdev4'
00:16:02.246    11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:02.246   11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:16:02.246   11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:02.246    11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:16:02.246    11:37:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:02.246    11:37:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:02.246    11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:02.246    11:37:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:02.246   11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:16:02.246   11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:16:02.246   11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:02.246    11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:16:02.246    11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:02.246    11:37:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:02.246    11:37:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:02.246    11:37:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:02.246   11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:16:02.246   11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:16:02.246   11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:02.246    11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:02.246    11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:16:02.246    11:37:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:02.246    11:37:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:02.246    11:37:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:02.246   11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:16:02.246   11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:16:02.246   11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:02.246    11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:16:02.246    11:37:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:02.246    11:37:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:02.246    11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:02.246    11:37:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:02.506   11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:16:02.506   11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
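verify_raid_bdev_properties then checks that the assembled raid volume and every configured member agree on geometry: block_size, md_size, md_interleave and dif_type are joined into one string per bdev and compared (here "512   ", i.e. 512-byte blocks with no metadata or DIF). A hedged one-liner per bdev:

    # Geometry string for the raid volume and for one member; they must match
    ./scripts/rpc.py bdev_get_bdevs -b Existed_Raid \
        | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
    ./scripts/rpc.py bdev_get_bdevs -b NewBaseBdev \
        | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'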
00:16:02.506   11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:16:02.506   11:37:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:02.506   11:37:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:02.506  [2024-12-16 11:37:28.339411] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:02.506  [2024-12-16 11:37:28.339565] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:02.506  [2024-12-16 11:37:28.339718] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:02.506  [2024-12-16 11:37:28.340087] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:02.506  [2024-12-16 11:37:28.340163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline
00:16:02.506   11:37:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:02.506   11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 93619
00:16:02.506   11:37:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 93619 ']'
00:16:02.506   11:37:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 93619
00:16:02.506    11:37:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname
00:16:02.506   11:37:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:16:02.506    11:37:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93619
00:16:02.506   11:37:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:16:02.506   11:37:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:16:02.506  killing process with pid 93619
00:16:02.506   11:37:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93619'
00:16:02.506   11:37:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 93619
00:16:02.506  [2024-12-16 11:37:28.384361] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:16:02.506   11:37:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 93619
00:16:02.506  [2024-12-16 11:37:28.467458] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:16:03.072   11:37:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:16:03.072  
00:16:03.072  real	0m9.828s
00:16:03.072  user	0m16.673s
00:16:03.072  sys	0m2.057s
00:16:03.072  ************************************
00:16:03.072  END TEST raid5f_state_function_test
00:16:03.072  ************************************
00:16:03.072   11:37:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:16:03.072   11:37:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
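Teardown for the test above: the array is deleted (its state changes from online to offline and the base bdevs are released during destruct), then the bdev_svc process that hosted it is killed and waited for. A rough manual equivalent, with the pid left as a placeholder:

    # Delete the array, then stop the bdev_svc app that was started for the test
    ./scripts/rpc.py bdev_raid_delete Existed_Raid
    kill "$raid_pid"    # $raid_pid: placeholder for the bdev_svc process id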
00:16:03.072   11:37:28 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true
00:16:03.072   11:37:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:16:03.072   11:37:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:16:03.072   11:37:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:16:03.072  ************************************
00:16:03.072  START TEST raid5f_state_function_test_sb
00:16:03.072  ************************************
00:16:03.072   11:37:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true
00:16:03.072   11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f
00:16:03.072   11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:16:03.072   11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:16:03.072   11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:16:03.072    11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:16:03.072    11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:16:03.073    11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:16:03.073    11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:16:03.073    11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:16:03.073    11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:16:03.073    11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:16:03.073    11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:16:03.073    11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:16:03.073    11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:16:03.073    11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:16:03.073    11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:16:03.073    11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:16:03.073    11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:16:03.073   11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:16:03.073   11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:16:03.073   11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:16:03.073   11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:16:03.073   11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:16:03.073   11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:16:03.073   11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']'
00:16:03.073   11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:16:03.073   11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:16:03.073   11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:16:03.073   11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:16:03.073   11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=94270
00:16:03.073   11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:16:03.073   11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 94270'
00:16:03.073  Process raid pid: 94270
00:16:03.073   11:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 94270
00:16:03.073   11:37:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 94270 ']'
00:16:03.073   11:37:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:03.073   11:37:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:16:03.073   11:37:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:03.073  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:03.073   11:37:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:16:03.073   11:37:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:03.073  [2024-12-16 11:37:29.012313] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:16:03.073  [2024-12-16 11:37:29.012548] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:03.332  [2024-12-16 11:37:29.159847] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:03.332  [2024-12-16 11:37:29.243214] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:16:03.332  [2024-12-16 11:37:29.324366] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:03.332  [2024-12-16 11:37:29.324577] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:03.899   11:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:16:03.899   11:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0
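For reference, the harness above boils down to launching the bdev_svc stub app and polling its RPC socket until it answers. A minimal manual sketch (assuming the SPDK tree at /home/vagrant/spdk_repo/spdk shown in the trace and its scripts/rpc.py helper; the polling RPC chosen here is illustrative):

    cd /home/vagrant/spdk_repo/spdk
    ./test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid &   # -L bdev_raid enables the *DEBUG* lines seen throughout this log
    raid_pid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done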
00:16:03.899   11:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:16:03.899   11:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:03.899   11:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:03.899  [2024-12-16 11:37:29.878590] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:03.899  [2024-12-16 11:37:29.878745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:03.899  [2024-12-16 11:37:29.878786] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:03.899  [2024-12-16 11:37:29.878812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:03.899  [2024-12-16 11:37:29.878874] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:03.899  [2024-12-16 11:37:29.878902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:03.899  [2024-12-16 11:37:29.878974] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:16:03.899  [2024-12-16 11:37:29.879006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:16:03.899   11:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
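Note that bdev_raid_create returns success even though none of the four base bdevs exist yet; the array is merely registered and left in the configuring state, with each missing member recorded (the NOTICE/DEBUG pairs above). A hand-run equivalent of that RPC, using the same flags as the trace (sketch):

    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_raid_create \
        -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid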
00:16:03.900   11:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:16:03.900   11:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:03.900   11:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:03.900   11:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:03.900   11:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:03.900   11:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:03.900   11:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:03.900   11:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:03.900   11:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:03.900   11:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:03.900    11:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:03.900    11:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:03.900    11:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:03.900    11:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:03.900    11:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:03.900   11:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:03.900    "name": "Existed_Raid",
00:16:03.900    "uuid": "64cd9454-2d83-4e8c-8262-2b732081a824",
00:16:03.900    "strip_size_kb": 64,
00:16:03.900    "state": "configuring",
00:16:03.900    "raid_level": "raid5f",
00:16:03.900    "superblock": true,
00:16:03.900    "num_base_bdevs": 4,
00:16:03.900    "num_base_bdevs_discovered": 0,
00:16:03.900    "num_base_bdevs_operational": 4,
00:16:03.900    "base_bdevs_list": [
00:16:03.900      {
00:16:03.900        "name": "BaseBdev1",
00:16:03.900        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:03.900        "is_configured": false,
00:16:03.900        "data_offset": 0,
00:16:03.900        "data_size": 0
00:16:03.900      },
00:16:03.900      {
00:16:03.900        "name": "BaseBdev2",
00:16:03.900        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:03.900        "is_configured": false,
00:16:03.900        "data_offset": 0,
00:16:03.900        "data_size": 0
00:16:03.900      },
00:16:03.900      {
00:16:03.900        "name": "BaseBdev3",
00:16:03.900        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:03.900        "is_configured": false,
00:16:03.900        "data_offset": 0,
00:16:03.900        "data_size": 0
00:16:03.900      },
00:16:03.900      {
00:16:03.900        "name": "BaseBdev4",
00:16:03.900        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:03.900        "is_configured": false,
00:16:03.900        "data_offset": 0,
00:16:03.900        "data_size": 0
00:16:03.900      }
00:16:03.900    ]
00:16:03.900  }'
00:16:03.900   11:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:03.900   11:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
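verify_raid_bdev_state simply dumps the array with bdev_raid_get_bdevs and filters the JSON with jq, as captured above. A standalone spot-check of the expected state, reusing the exact filter from the trace (sketch):

    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state'
    # prints 'configuring' at this point: no base bdev has been discovered yet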
00:16:04.466   11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:16:04.466   11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:04.466   11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:04.466  [2024-12-16 11:37:30.297790] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:04.466  [2024-12-16 11:37:30.297934] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:16:04.466   11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:04.466   11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:16:04.466   11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:04.466   11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:04.466  [2024-12-16 11:37:30.309817] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:04.466  [2024-12-16 11:37:30.309910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:04.466  [2024-12-16 11:37:30.309938] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:04.466  [2024-12-16 11:37:30.309962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:04.466  [2024-12-16 11:37:30.309981] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:04.466  [2024-12-16 11:37:30.310004] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:04.466  [2024-12-16 11:37:30.310022] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:16:04.466  [2024-12-16 11:37:30.310045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:16:04.466   11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:04.466   11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:16:04.466   11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:04.466   11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:04.466  [2024-12-16 11:37:30.338128] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:04.466  BaseBdev1
00:16:04.466   11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:04.466   11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:16:04.466   11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:16:04.466   11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:16:04.466   11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:16:04.466   11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:16:04.466   11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:16:04.466   11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:16:04.466   11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:04.466   11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:04.466   11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:04.467   11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:16:04.467   11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:04.467   11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:04.467  [
00:16:04.467  {
00:16:04.467  "name": "BaseBdev1",
00:16:04.467  "aliases": [
00:16:04.467  "68687d83-aedb-4a7c-a632-2e167996b01a"
00:16:04.467  ],
00:16:04.467  "product_name": "Malloc disk",
00:16:04.467  "block_size": 512,
00:16:04.467  "num_blocks": 65536,
00:16:04.467  "uuid": "68687d83-aedb-4a7c-a632-2e167996b01a",
00:16:04.467  "assigned_rate_limits": {
00:16:04.467  "rw_ios_per_sec": 0,
00:16:04.467  "rw_mbytes_per_sec": 0,
00:16:04.467  "r_mbytes_per_sec": 0,
00:16:04.467  "w_mbytes_per_sec": 0
00:16:04.467  },
00:16:04.467  "claimed": true,
00:16:04.467  "claim_type": "exclusive_write",
00:16:04.467  "zoned": false,
00:16:04.467  "supported_io_types": {
00:16:04.467  "read": true,
00:16:04.467  "write": true,
00:16:04.467  "unmap": true,
00:16:04.467  "flush": true,
00:16:04.467  "reset": true,
00:16:04.467  "nvme_admin": false,
00:16:04.467  "nvme_io": false,
00:16:04.467  "nvme_io_md": false,
00:16:04.467  "write_zeroes": true,
00:16:04.467  "zcopy": true,
00:16:04.467  "get_zone_info": false,
00:16:04.467  "zone_management": false,
00:16:04.467  "zone_append": false,
00:16:04.467  "compare": false,
00:16:04.467  "compare_and_write": false,
00:16:04.467  "abort": true,
00:16:04.467  "seek_hole": false,
00:16:04.467  "seek_data": false,
00:16:04.467  "copy": true,
00:16:04.467  "nvme_iov_md": false
00:16:04.467  },
00:16:04.467  "memory_domains": [
00:16:04.467  {
00:16:04.467  "dma_device_id": "system",
00:16:04.467  "dma_device_type": 1
00:16:04.467  },
00:16:04.467  {
00:16:04.467  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:04.467  "dma_device_type": 2
00:16:04.467  }
00:16:04.467  ],
00:16:04.467  "driver_specific": {}
00:16:04.467  }
00:16:04.467  ]
00:16:04.467   11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:04.467   11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
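Each base bdev is a 32 MiB malloc disk with a 512-byte block size (hence num_blocks 65536 in the dump), and waitforbdev amounts to bdev_wait_for_examine followed by a bounded bdev_get_bdevs lookup. The same three RPCs by hand (sketch, arguments copied from the trace):

    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 32 512 -b BaseBdev1
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_wait_for_examine
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b BaseBdev1 -t 2000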
00:16:04.467   11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:16:04.467   11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:04.467   11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:04.467   11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:04.467   11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:04.467   11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:04.467   11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:04.467   11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:04.467   11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:04.467   11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:04.467    11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:04.467    11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:04.467    11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:04.467    11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:04.467    11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:04.467   11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:04.467    "name": "Existed_Raid",
00:16:04.467    "uuid": "e9f1930a-c28b-4c3d-a8bc-82e118acb2c5",
00:16:04.467    "strip_size_kb": 64,
00:16:04.467    "state": "configuring",
00:16:04.467    "raid_level": "raid5f",
00:16:04.467    "superblock": true,
00:16:04.467    "num_base_bdevs": 4,
00:16:04.467    "num_base_bdevs_discovered": 1,
00:16:04.467    "num_base_bdevs_operational": 4,
00:16:04.467    "base_bdevs_list": [
00:16:04.467      {
00:16:04.467        "name": "BaseBdev1",
00:16:04.467        "uuid": "68687d83-aedb-4a7c-a632-2e167996b01a",
00:16:04.467        "is_configured": true,
00:16:04.467        "data_offset": 2048,
00:16:04.467        "data_size": 63488
00:16:04.467      },
00:16:04.467      {
00:16:04.467        "name": "BaseBdev2",
00:16:04.467        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:04.467        "is_configured": false,
00:16:04.467        "data_offset": 0,
00:16:04.467        "data_size": 0
00:16:04.467      },
00:16:04.467      {
00:16:04.467        "name": "BaseBdev3",
00:16:04.467        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:04.467        "is_configured": false,
00:16:04.467        "data_offset": 0,
00:16:04.467        "data_size": 0
00:16:04.467      },
00:16:04.467      {
00:16:04.467        "name": "BaseBdev4",
00:16:04.467        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:04.467        "is_configured": false,
00:16:04.467        "data_offset": 0,
00:16:04.467        "data_size": 0
00:16:04.467      }
00:16:04.467    ]
00:16:04.467  }'
00:16:04.467   11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:04.467   11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:04.726   11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:16:04.726   11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:04.726   11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:04.985  [2024-12-16 11:37:30.797749] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:04.985  [2024-12-16 11:37:30.797912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:16:04.985   11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
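An array that is still configuring can be deleted and re-created freely; on the re-create that follows (bdev_raid.sh@249), BaseBdev1 is claimed immediately because it now exists, while the other three are again only remembered. The teardown itself is a single RPC (sketch):

    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_raid_delete Existed_Raid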
00:16:04.985   11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:16:04.985   11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:04.985   11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:04.985  [2024-12-16 11:37:30.809787] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:04.985  [2024-12-16 11:37:30.812324] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:04.985  [2024-12-16 11:37:30.812416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:04.985  [2024-12-16 11:37:30.812450] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:04.985  [2024-12-16 11:37:30.812475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:04.985  [2024-12-16 11:37:30.812496] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:16:04.985  [2024-12-16 11:37:30.812518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:16:04.985   11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:04.985   11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:16:04.985   11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:16:04.985   11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:16:04.985   11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:04.985   11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:04.985   11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:04.985   11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:04.985   11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:04.985   11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:04.985   11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:04.985   11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:04.985   11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:04.985    11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:04.985    11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:04.985    11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:04.985    11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:04.985    11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:04.985   11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:04.985    "name": "Existed_Raid",
00:16:04.985    "uuid": "36eea015-cc0d-439b-addd-0d1e492446cd",
00:16:04.985    "strip_size_kb": 64,
00:16:04.985    "state": "configuring",
00:16:04.985    "raid_level": "raid5f",
00:16:04.985    "superblock": true,
00:16:04.985    "num_base_bdevs": 4,
00:16:04.985    "num_base_bdevs_discovered": 1,
00:16:04.985    "num_base_bdevs_operational": 4,
00:16:04.985    "base_bdevs_list": [
00:16:04.985      {
00:16:04.985        "name": "BaseBdev1",
00:16:04.985        "uuid": "68687d83-aedb-4a7c-a632-2e167996b01a",
00:16:04.985        "is_configured": true,
00:16:04.985        "data_offset": 2048,
00:16:04.985        "data_size": 63488
00:16:04.985      },
00:16:04.985      {
00:16:04.985        "name": "BaseBdev2",
00:16:04.985        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:04.985        "is_configured": false,
00:16:04.985        "data_offset": 0,
00:16:04.985        "data_size": 0
00:16:04.985      },
00:16:04.985      {
00:16:04.985        "name": "BaseBdev3",
00:16:04.985        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:04.985        "is_configured": false,
00:16:04.985        "data_offset": 0,
00:16:04.985        "data_size": 0
00:16:04.985      },
00:16:04.985      {
00:16:04.985        "name": "BaseBdev4",
00:16:04.985        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:04.985        "is_configured": false,
00:16:04.985        "data_offset": 0,
00:16:04.985        "data_size": 0
00:16:04.985      }
00:16:04.985    ]
00:16:04.985  }'
00:16:04.985   11:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:04.985   11:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:05.249   11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:16:05.249   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:05.249   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:05.249  [2024-12-16 11:37:31.268980] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:16:05.249  BaseBdev2
00:16:05.249   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:05.249   11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:16:05.249   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:16:05.249   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:16:05.249   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:16:05.249   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:16:05.249   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:16:05.249   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:16:05.249   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:05.249   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:05.249   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:05.249   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:16:05.249   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:05.249   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:05.249  [
00:16:05.249  {
00:16:05.249  "name": "BaseBdev2",
00:16:05.249  "aliases": [
00:16:05.249  "2647d423-ff9d-4500-9380-27701d38cc15"
00:16:05.249  ],
00:16:05.249  "product_name": "Malloc disk",
00:16:05.249  "block_size": 512,
00:16:05.249  "num_blocks": 65536,
00:16:05.249  "uuid": "2647d423-ff9d-4500-9380-27701d38cc15",
00:16:05.249  "assigned_rate_limits": {
00:16:05.249  "rw_ios_per_sec": 0,
00:16:05.250  "rw_mbytes_per_sec": 0,
00:16:05.250  "r_mbytes_per_sec": 0,
00:16:05.250  "w_mbytes_per_sec": 0
00:16:05.250  },
00:16:05.250  "claimed": true,
00:16:05.250  "claim_type": "exclusive_write",
00:16:05.250  "zoned": false,
00:16:05.250  "supported_io_types": {
00:16:05.250  "read": true,
00:16:05.250  "write": true,
00:16:05.250  "unmap": true,
00:16:05.250  "flush": true,
00:16:05.250  "reset": true,
00:16:05.250  "nvme_admin": false,
00:16:05.250  "nvme_io": false,
00:16:05.250  "nvme_io_md": false,
00:16:05.250  "write_zeroes": true,
00:16:05.250  "zcopy": true,
00:16:05.250  "get_zone_info": false,
00:16:05.250  "zone_management": false,
00:16:05.250  "zone_append": false,
00:16:05.250  "compare": false,
00:16:05.250  "compare_and_write": false,
00:16:05.250  "abort": true,
00:16:05.250  "seek_hole": false,
00:16:05.250  "seek_data": false,
00:16:05.250  "copy": true,
00:16:05.250  "nvme_iov_md": false
00:16:05.250  },
00:16:05.250  "memory_domains": [
00:16:05.250  {
00:16:05.250  "dma_device_id": "system",
00:16:05.250  "dma_device_type": 1
00:16:05.250  },
00:16:05.250  {
00:16:05.250  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:05.250  "dma_device_type": 2
00:16:05.250  }
00:16:05.250  ],
00:16:05.250  "driver_specific": {}
00:16:05.250  }
00:16:05.250  ]
00:16:05.250   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:05.250   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:16:05.250   11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:16:05.250   11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:16:05.250   11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:16:05.250   11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:05.250   11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:05.250   11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:05.250   11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:05.250   11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:05.250   11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:05.250   11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:05.250   11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:05.250   11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:05.524    11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:05.524    11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:05.524    11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:05.524    11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:05.524    11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:05.524   11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:05.524    "name": "Existed_Raid",
00:16:05.524    "uuid": "36eea015-cc0d-439b-addd-0d1e492446cd",
00:16:05.524    "strip_size_kb": 64,
00:16:05.524    "state": "configuring",
00:16:05.524    "raid_level": "raid5f",
00:16:05.524    "superblock": true,
00:16:05.524    "num_base_bdevs": 4,
00:16:05.524    "num_base_bdevs_discovered": 2,
00:16:05.524    "num_base_bdevs_operational": 4,
00:16:05.524    "base_bdevs_list": [
00:16:05.524      {
00:16:05.524        "name": "BaseBdev1",
00:16:05.524        "uuid": "68687d83-aedb-4a7c-a632-2e167996b01a",
00:16:05.524        "is_configured": true,
00:16:05.524        "data_offset": 2048,
00:16:05.524        "data_size": 63488
00:16:05.524      },
00:16:05.524      {
00:16:05.524        "name": "BaseBdev2",
00:16:05.524        "uuid": "2647d423-ff9d-4500-9380-27701d38cc15",
00:16:05.524        "is_configured": true,
00:16:05.524        "data_offset": 2048,
00:16:05.524        "data_size": 63488
00:16:05.524      },
00:16:05.524      {
00:16:05.524        "name": "BaseBdev3",
00:16:05.524        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:05.524        "is_configured": false,
00:16:05.524        "data_offset": 0,
00:16:05.524        "data_size": 0
00:16:05.524      },
00:16:05.524      {
00:16:05.524        "name": "BaseBdev4",
00:16:05.524        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:05.524        "is_configured": false,
00:16:05.524        "data_offset": 0,
00:16:05.524        "data_size": 0
00:16:05.524      }
00:16:05.524    ]
00:16:05.524  }'
00:16:05.524   11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:05.524   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:05.784   11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:16:05.784   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:05.784   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:05.784  [2024-12-16 11:37:31.761604] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:05.784  BaseBdev3
00:16:05.784   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:05.784   11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:16:05.784   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:16:05.784   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:16:05.784   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:16:05.784   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:16:05.784   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:16:05.784   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:16:05.784   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:05.784   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:05.784   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:05.784   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:16:05.784   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:05.784   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:05.784  [
00:16:05.784  {
00:16:05.784  "name": "BaseBdev3",
00:16:05.784  "aliases": [
00:16:05.784  "d4824ed3-2c85-4183-afaa-96474f1c50b6"
00:16:05.784  ],
00:16:05.784  "product_name": "Malloc disk",
00:16:05.784  "block_size": 512,
00:16:05.784  "num_blocks": 65536,
00:16:05.784  "uuid": "d4824ed3-2c85-4183-afaa-96474f1c50b6",
00:16:05.784  "assigned_rate_limits": {
00:16:05.784  "rw_ios_per_sec": 0,
00:16:05.784  "rw_mbytes_per_sec": 0,
00:16:05.784  "r_mbytes_per_sec": 0,
00:16:05.784  "w_mbytes_per_sec": 0
00:16:05.784  },
00:16:05.784  "claimed": true,
00:16:05.784  "claim_type": "exclusive_write",
00:16:05.784  "zoned": false,
00:16:05.784  "supported_io_types": {
00:16:05.784  "read": true,
00:16:05.784  "write": true,
00:16:05.784  "unmap": true,
00:16:05.784  "flush": true,
00:16:05.784  "reset": true,
00:16:05.784  "nvme_admin": false,
00:16:05.784  "nvme_io": false,
00:16:05.784  "nvme_io_md": false,
00:16:05.784  "write_zeroes": true,
00:16:05.784  "zcopy": true,
00:16:05.784  "get_zone_info": false,
00:16:05.784  "zone_management": false,
00:16:05.784  "zone_append": false,
00:16:05.784  "compare": false,
00:16:05.784  "compare_and_write": false,
00:16:05.784  "abort": true,
00:16:05.785  "seek_hole": false,
00:16:05.785  "seek_data": false,
00:16:05.785  "copy": true,
00:16:05.785  "nvme_iov_md": false
00:16:05.785  },
00:16:05.785  "memory_domains": [
00:16:05.785  {
00:16:05.785  "dma_device_id": "system",
00:16:05.785  "dma_device_type": 1
00:16:05.785  },
00:16:05.785  {
00:16:05.785  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:05.785  "dma_device_type": 2
00:16:05.785  }
00:16:05.785  ],
00:16:05.785  "driver_specific": {}
00:16:05.785  }
00:16:05.785  ]
00:16:05.785   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:05.785   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:16:05.785   11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:16:05.785   11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:16:05.785   11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:16:05.785   11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:05.785   11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:05.785   11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:05.785   11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:05.785   11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:05.785   11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:05.785   11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:05.785   11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:05.785   11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:05.785    11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:05.785    11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:05.785    11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:05.785    11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:05.785    11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:06.045   11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:06.045    "name": "Existed_Raid",
00:16:06.045    "uuid": "36eea015-cc0d-439b-addd-0d1e492446cd",
00:16:06.045    "strip_size_kb": 64,
00:16:06.045    "state": "configuring",
00:16:06.045    "raid_level": "raid5f",
00:16:06.045    "superblock": true,
00:16:06.045    "num_base_bdevs": 4,
00:16:06.045    "num_base_bdevs_discovered": 3,
00:16:06.045    "num_base_bdevs_operational": 4,
00:16:06.045    "base_bdevs_list": [
00:16:06.045      {
00:16:06.045        "name": "BaseBdev1",
00:16:06.045        "uuid": "68687d83-aedb-4a7c-a632-2e167996b01a",
00:16:06.045        "is_configured": true,
00:16:06.045        "data_offset": 2048,
00:16:06.045        "data_size": 63488
00:16:06.045      },
00:16:06.045      {
00:16:06.045        "name": "BaseBdev2",
00:16:06.045        "uuid": "2647d423-ff9d-4500-9380-27701d38cc15",
00:16:06.045        "is_configured": true,
00:16:06.045        "data_offset": 2048,
00:16:06.045        "data_size": 63488
00:16:06.045      },
00:16:06.045      {
00:16:06.045        "name": "BaseBdev3",
00:16:06.045        "uuid": "d4824ed3-2c85-4183-afaa-96474f1c50b6",
00:16:06.045        "is_configured": true,
00:16:06.045        "data_offset": 2048,
00:16:06.045        "data_size": 63488
00:16:06.045      },
00:16:06.045      {
00:16:06.045        "name": "BaseBdev4",
00:16:06.045        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:06.045        "is_configured": false,
00:16:06.045        "data_offset": 0,
00:16:06.045        "data_size": 0
00:16:06.045      }
00:16:06.045    ]
00:16:06.045  }'
00:16:06.045   11:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:06.045   11:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:06.305   11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:16:06.305   11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:06.305   11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:06.305  [2024-12-16 11:37:32.226124] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:16:06.305  [2024-12-16 11:37:32.226507] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:16:06.305  [2024-12-16 11:37:32.226594] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:16:06.305  [2024-12-16 11:37:32.226987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:16:06.305  BaseBdev4
00:16:06.305  [2024-12-16 11:37:32.227649] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:16:06.305  [2024-12-16 11:37:32.227677] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:16:06.305  [2024-12-16 11:37:32.227830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
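The blockcnt reported when the array goes online follows from the numbers already shown in the dumps: with the superblock enabled, each 65536-block base bdev sets aside 2048 blocks (data_offset 2048, data_size 63488), and raid5f keeps one member's worth of parity, so the volume exposes (4 - 1) * 63488 blocks:

    echo $(( (4 - 1) * 63488 ))   # 190464, matching 'blockcnt 190464' above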
00:16:06.305   11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:06.305   11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:16:06.305   11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:16:06.305   11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:16:06.305   11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:16:06.305   11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:16:06.305   11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:16:06.305   11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:16:06.305   11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:06.305   11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:06.305   11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:06.305   11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:16:06.305   11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:06.305   11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:06.305  [
00:16:06.305  {
00:16:06.305  "name": "BaseBdev4",
00:16:06.305  "aliases": [
00:16:06.305  "f6e347c7-9ad8-4cd5-94bf-e5f93a0e4f2d"
00:16:06.305  ],
00:16:06.305  "product_name": "Malloc disk",
00:16:06.305  "block_size": 512,
00:16:06.305  "num_blocks": 65536,
00:16:06.305  "uuid": "f6e347c7-9ad8-4cd5-94bf-e5f93a0e4f2d",
00:16:06.305  "assigned_rate_limits": {
00:16:06.305  "rw_ios_per_sec": 0,
00:16:06.305  "rw_mbytes_per_sec": 0,
00:16:06.305  "r_mbytes_per_sec": 0,
00:16:06.305  "w_mbytes_per_sec": 0
00:16:06.305  },
00:16:06.305  "claimed": true,
00:16:06.305  "claim_type": "exclusive_write",
00:16:06.305  "zoned": false,
00:16:06.305  "supported_io_types": {
00:16:06.305  "read": true,
00:16:06.305  "write": true,
00:16:06.305  "unmap": true,
00:16:06.305  "flush": true,
00:16:06.305  "reset": true,
00:16:06.305  "nvme_admin": false,
00:16:06.305  "nvme_io": false,
00:16:06.305  "nvme_io_md": false,
00:16:06.305  "write_zeroes": true,
00:16:06.305  "zcopy": true,
00:16:06.305  "get_zone_info": false,
00:16:06.305  "zone_management": false,
00:16:06.305  "zone_append": false,
00:16:06.305  "compare": false,
00:16:06.305  "compare_and_write": false,
00:16:06.305  "abort": true,
00:16:06.305  "seek_hole": false,
00:16:06.305  "seek_data": false,
00:16:06.305  "copy": true,
00:16:06.305  "nvme_iov_md": false
00:16:06.305  },
00:16:06.305  "memory_domains": [
00:16:06.305  {
00:16:06.305  "dma_device_id": "system",
00:16:06.305  "dma_device_type": 1
00:16:06.305  },
00:16:06.305  {
00:16:06.305  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:06.305  "dma_device_type": 2
00:16:06.305  }
00:16:06.305  ],
00:16:06.305  "driver_specific": {}
00:16:06.305  }
00:16:06.305  ]
00:16:06.305   11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:06.305   11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:16:06.305   11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:16:06.305   11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:16:06.305   11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4
00:16:06.305   11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:06.305   11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:06.305   11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:06.305   11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:06.305   11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:06.305   11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:06.306   11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:06.306   11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:06.306   11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:06.306    11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:06.306    11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:06.306    11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:06.306    11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:06.306    11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:06.306   11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:06.306    "name": "Existed_Raid",
00:16:06.306    "uuid": "36eea015-cc0d-439b-addd-0d1e492446cd",
00:16:06.306    "strip_size_kb": 64,
00:16:06.306    "state": "online",
00:16:06.306    "raid_level": "raid5f",
00:16:06.306    "superblock": true,
00:16:06.306    "num_base_bdevs": 4,
00:16:06.306    "num_base_bdevs_discovered": 4,
00:16:06.306    "num_base_bdevs_operational": 4,
00:16:06.306    "base_bdevs_list": [
00:16:06.306      {
00:16:06.306        "name": "BaseBdev1",
00:16:06.306        "uuid": "68687d83-aedb-4a7c-a632-2e167996b01a",
00:16:06.306        "is_configured": true,
00:16:06.306        "data_offset": 2048,
00:16:06.306        "data_size": 63488
00:16:06.306      },
00:16:06.306      {
00:16:06.306        "name": "BaseBdev2",
00:16:06.306        "uuid": "2647d423-ff9d-4500-9380-27701d38cc15",
00:16:06.306        "is_configured": true,
00:16:06.306        "data_offset": 2048,
00:16:06.306        "data_size": 63488
00:16:06.306      },
00:16:06.306      {
00:16:06.306        "name": "BaseBdev3",
00:16:06.306        "uuid": "d4824ed3-2c85-4183-afaa-96474f1c50b6",
00:16:06.306        "is_configured": true,
00:16:06.306        "data_offset": 2048,
00:16:06.306        "data_size": 63488
00:16:06.306      },
00:16:06.306      {
00:16:06.306        "name": "BaseBdev4",
00:16:06.306        "uuid": "f6e347c7-9ad8-4cd5-94bf-e5f93a0e4f2d",
00:16:06.306        "is_configured": true,
00:16:06.306        "data_offset": 2048,
00:16:06.306        "data_size": 63488
00:16:06.306      }
00:16:06.306    ]
00:16:06.306  }'
00:16:06.306   11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:06.306   11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:06.875   11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:16:06.875   11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:16:06.875   11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:16:06.875   11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:16:06.875   11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:16:06.875   11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:16:06.875    11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:16:06.875    11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:16:06.875    11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:06.875    11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:06.875  [2024-12-16 11:37:32.750395] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:06.875    11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:06.875   11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:16:06.875    "name": "Existed_Raid",
00:16:06.875    "aliases": [
00:16:06.875      "36eea015-cc0d-439b-addd-0d1e492446cd"
00:16:06.875    ],
00:16:06.875    "product_name": "Raid Volume",
00:16:06.875    "block_size": 512,
00:16:06.875    "num_blocks": 190464,
00:16:06.875    "uuid": "36eea015-cc0d-439b-addd-0d1e492446cd",
00:16:06.875    "assigned_rate_limits": {
00:16:06.875      "rw_ios_per_sec": 0,
00:16:06.875      "rw_mbytes_per_sec": 0,
00:16:06.875      "r_mbytes_per_sec": 0,
00:16:06.875      "w_mbytes_per_sec": 0
00:16:06.875    },
00:16:06.875    "claimed": false,
00:16:06.875    "zoned": false,
00:16:06.875    "supported_io_types": {
00:16:06.875      "read": true,
00:16:06.875      "write": true,
00:16:06.875      "unmap": false,
00:16:06.875      "flush": false,
00:16:06.875      "reset": true,
00:16:06.875      "nvme_admin": false,
00:16:06.875      "nvme_io": false,
00:16:06.875      "nvme_io_md": false,
00:16:06.875      "write_zeroes": true,
00:16:06.875      "zcopy": false,
00:16:06.875      "get_zone_info": false,
00:16:06.875      "zone_management": false,
00:16:06.875      "zone_append": false,
00:16:06.875      "compare": false,
00:16:06.875      "compare_and_write": false,
00:16:06.875      "abort": false,
00:16:06.875      "seek_hole": false,
00:16:06.875      "seek_data": false,
00:16:06.875      "copy": false,
00:16:06.875      "nvme_iov_md": false
00:16:06.875    },
00:16:06.875    "driver_specific": {
00:16:06.875      "raid": {
00:16:06.875        "uuid": "36eea015-cc0d-439b-addd-0d1e492446cd",
00:16:06.875        "strip_size_kb": 64,
00:16:06.875        "state": "online",
00:16:06.875        "raid_level": "raid5f",
00:16:06.875        "superblock": true,
00:16:06.875        "num_base_bdevs": 4,
00:16:06.875        "num_base_bdevs_discovered": 4,
00:16:06.875        "num_base_bdevs_operational": 4,
00:16:06.875        "base_bdevs_list": [
00:16:06.875          {
00:16:06.875            "name": "BaseBdev1",
00:16:06.875            "uuid": "68687d83-aedb-4a7c-a632-2e167996b01a",
00:16:06.875            "is_configured": true,
00:16:06.875            "data_offset": 2048,
00:16:06.875            "data_size": 63488
00:16:06.875          },
00:16:06.875          {
00:16:06.875            "name": "BaseBdev2",
00:16:06.875            "uuid": "2647d423-ff9d-4500-9380-27701d38cc15",
00:16:06.875            "is_configured": true,
00:16:06.875            "data_offset": 2048,
00:16:06.875            "data_size": 63488
00:16:06.875          },
00:16:06.875          {
00:16:06.875            "name": "BaseBdev3",
00:16:06.875            "uuid": "d4824ed3-2c85-4183-afaa-96474f1c50b6",
00:16:06.875            "is_configured": true,
00:16:06.875            "data_offset": 2048,
00:16:06.875            "data_size": 63488
00:16:06.875          },
00:16:06.875          {
00:16:06.875            "name": "BaseBdev4",
00:16:06.875            "uuid": "f6e347c7-9ad8-4cd5-94bf-e5f93a0e4f2d",
00:16:06.875            "is_configured": true,
00:16:06.875            "data_offset": 2048,
00:16:06.875            "data_size": 63488
00:16:06.875          }
00:16:06.875        ]
00:16:06.875      }
00:16:06.875    }
00:16:06.875  }'
00:16:06.875    11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:16:06.875   11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:16:06.875  BaseBdev2
00:16:06.875  BaseBdev3
00:16:06.875  BaseBdev4'
00:16:06.875    11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:06.875   11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
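
At @187-@189 above, the test stores the Existed_Raid JSON in $raid_bdev_info and reduces it to two values: the names of the configured base bdevs and a "block_size md_size md_interleave dif_type" tuple. A minimal sketch of those two jq extractions (filters copied from the trace; only the layout here is illustrative):

  # Reduce the captured raid bdev JSON to the two values used below.
  base_bdev_names=$(jq -r '.driver_specific.raid.base_bdevs_list[]
                           | select(.is_configured == true).name' <<< "$raid_bdev_info")
  cmp_raid_bdev=$(jq -r '[.block_size, .md_size, .md_interleave, .dif_type]
                         | join(" ")' <<< "$raid_bdev_info")
  # The raid bdev reports no md_size/md_interleave/dif_type here, so the tuple
  # collapses to "512   " as seen in the trace.
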
00:16:06.875   11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:06.875    11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:16:06.875    11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:06.875    11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:06.875    11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:06.875    11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:06.875   11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:16:06.875   11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:16:06.875   11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:06.875    11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:16:06.875    11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:06.875    11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:06.875    11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:06.875    11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:07.135   11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:16:07.135   11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:16:07.135   11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:07.135    11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:07.135    11:37:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:16:07.135    11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:07.135    11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:07.135    11:37:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:07.135   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:16:07.135   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:16:07.135   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:07.135    11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:16:07.135    11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:07.135    11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:07.135    11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:07.135    11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:07.135   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:16:07.135   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
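
The @191-@193 loop above fetches each configured base bdev with bdev_get_bdevs and requires its tuple to match the raid bdev's, i.e. that block size and metadata/DIF settings are consistent across the array. A sketch of that loop, assuming rpc_cmd is the suite's JSON-RPC wrapper as used throughout the trace:

  # Compare every configured base bdev against the raid bdev's tuple.
  for name in $base_bdev_names; do
      cmp_base_bdev=$(rpc_cmd bdev_get_bdevs -b "$name" |
          jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
      # Illustrative only; the script itself would fail the test on mismatch.
      [[ $cmp_raid_bdev == "$cmp_base_bdev" ]] || echo "metadata mismatch on $name"
  done
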
00:16:07.135   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:16:07.135   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:07.135   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:07.135  [2024-12-16 11:37:33.073700] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:16:07.135   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:07.135   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:16:07.135   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f
00:16:07.135   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:16:07.135   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0
00:16:07.135   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:16:07.135   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:16:07.135   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:07.135   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:07.135   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:07.135   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:07.135   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:07.135   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:07.135   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:07.135   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:07.135   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:07.135    11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:07.135    11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:07.135    11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:07.135    11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:07.135    11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:07.135   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:07.135    "name": "Existed_Raid",
00:16:07.135    "uuid": "36eea015-cc0d-439b-addd-0d1e492446cd",
00:16:07.135    "strip_size_kb": 64,
00:16:07.135    "state": "online",
00:16:07.135    "raid_level": "raid5f",
00:16:07.135    "superblock": true,
00:16:07.135    "num_base_bdevs": 4,
00:16:07.135    "num_base_bdevs_discovered": 3,
00:16:07.135    "num_base_bdevs_operational": 3,
00:16:07.135    "base_bdevs_list": [
00:16:07.135      {
00:16:07.135        "name": null,
00:16:07.135        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:07.135        "is_configured": false,
00:16:07.135        "data_offset": 0,
00:16:07.135        "data_size": 63488
00:16:07.135      },
00:16:07.135      {
00:16:07.135        "name": "BaseBdev2",
00:16:07.135        "uuid": "2647d423-ff9d-4500-9380-27701d38cc15",
00:16:07.135        "is_configured": true,
00:16:07.135        "data_offset": 2048,
00:16:07.135        "data_size": 63488
00:16:07.135      },
00:16:07.135      {
00:16:07.135        "name": "BaseBdev3",
00:16:07.135        "uuid": "d4824ed3-2c85-4183-afaa-96474f1c50b6",
00:16:07.135        "is_configured": true,
00:16:07.135        "data_offset": 2048,
00:16:07.135        "data_size": 63488
00:16:07.135      },
00:16:07.135      {
00:16:07.135        "name": "BaseBdev4",
00:16:07.135        "uuid": "f6e347c7-9ad8-4cd5-94bf-e5f93a0e4f2d",
00:16:07.135        "is_configured": true,
00:16:07.135        "data_offset": 2048,
00:16:07.135        "data_size": 63488
00:16:07.135      }
00:16:07.135    ]
00:16:07.135  }'
00:16:07.135   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:07.135   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
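
Deleting BaseBdev1 with bdev_malloc_delete exercises the degraded path: has_redundancy returns 0 for raid5f, so the expected state stays online and verify_raid_bdev_state is called for 3 operational members; the re-fetched JSON above now shows a null slot where BaseBdev1 was. The body of verify_raid_bdev_state beyond @113-@115 is not visible in this trace; the following is only a guess at the checks it presumably performs, using fields present in that JSON:

  # Guessed assertions against the captured $raid_bdev_info (not taken from the trace).
  [[ $(jq -r '.state' <<< "$raid_bdev_info") == "$expected_state" ]] || echo "unexpected state"
  [[ $(jq -r '.raid_level' <<< "$raid_bdev_info") == "$raid_level" ]] || echo "unexpected raid level"
  (( $(jq -r '.num_base_bdevs_discovered' <<< "$raid_bdev_info") == num_base_bdevs_operational )) ||
      echo "unexpected number of discovered base bdevs"
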
00:16:07.703   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:16:07.703   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:16:07.703    11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:07.703    11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:07.703    11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:07.703    11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:16:07.703    11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:07.703   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:16:07.703   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:07.703   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:16:07.703   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:07.703   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:07.703  [2024-12-16 11:37:33.602011] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:16:07.703  [2024-12-16 11:37:33.602313] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:07.703  [2024-12-16 11:37:33.624230] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:07.703   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:07.703   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:16:07.703   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:16:07.703    11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:07.703    11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:16:07.703    11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:07.703    11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:07.703    11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:07.703   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:16:07.703   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:07.703   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:16:07.703   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:07.703   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:07.703  [2024-12-16 11:37:33.680198] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:16:07.703   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:07.703   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:16:07.703   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:16:07.703    11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:07.703    11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:07.703    11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:07.703    11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:16:07.703    11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:07.962   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:07.963  [2024-12-16 11:37:33.781523] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:16:07.963  [2024-12-16 11:37:33.781693] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:16:07.963    11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:07.963    11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:16:07.963    11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:07.963    11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:07.963    11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
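
Removing the remaining malloc bdevs in the @270 loop eventually takes the array from online to offline (see the raid_bdev_deconfigure and raid_bdev_cleanup debug lines above), and once the last member is gone bdev_raid_get_bdevs returns an empty list. A sketch of the @278-@279 "array is gone" check: with an empty list, .[0] is null, select(.) drops it, and the -n test fails as expected.

  # Confirm Existed_Raid was cleaned up after its last base bdev was deleted.
  raid_bdev=$(rpc_cmd bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)')
  if [[ -n $raid_bdev ]]; then echo "Existed_Raid unexpectedly still present"; fi
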
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:07.963  BaseBdev2
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:07.963  [
00:16:07.963  {
00:16:07.963  "name": "BaseBdev2",
00:16:07.963  "aliases": [
00:16:07.963  "7e845603-5b15-4cd1-9424-66d1540f21c8"
00:16:07.963  ],
00:16:07.963  "product_name": "Malloc disk",
00:16:07.963  "block_size": 512,
00:16:07.963  "num_blocks": 65536,
00:16:07.963  "uuid": "7e845603-5b15-4cd1-9424-66d1540f21c8",
00:16:07.963  "assigned_rate_limits": {
00:16:07.963  "rw_ios_per_sec": 0,
00:16:07.963  "rw_mbytes_per_sec": 0,
00:16:07.963  "r_mbytes_per_sec": 0,
00:16:07.963  "w_mbytes_per_sec": 0
00:16:07.963  },
00:16:07.963  "claimed": false,
00:16:07.963  "zoned": false,
00:16:07.963  "supported_io_types": {
00:16:07.963  "read": true,
00:16:07.963  "write": true,
00:16:07.963  "unmap": true,
00:16:07.963  "flush": true,
00:16:07.963  "reset": true,
00:16:07.963  "nvme_admin": false,
00:16:07.963  "nvme_io": false,
00:16:07.963  "nvme_io_md": false,
00:16:07.963  "write_zeroes": true,
00:16:07.963  "zcopy": true,
00:16:07.963  "get_zone_info": false,
00:16:07.963  "zone_management": false,
00:16:07.963  "zone_append": false,
00:16:07.963  "compare": false,
00:16:07.963  "compare_and_write": false,
00:16:07.963  "abort": true,
00:16:07.963  "seek_hole": false,
00:16:07.963  "seek_data": false,
00:16:07.963  "copy": true,
00:16:07.963  "nvme_iov_md": false
00:16:07.963  },
00:16:07.963  "memory_domains": [
00:16:07.963  {
00:16:07.963  "dma_device_id": "system",
00:16:07.963  "dma_device_type": 1
00:16:07.963  },
00:16:07.963  {
00:16:07.963  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:07.963  "dma_device_type": 2
00:16:07.963  }
00:16:07.963  ],
00:16:07.963  "driver_specific": {}
00:16:07.963  }
00:16:07.963  ]
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:07.963  BaseBdev3
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:07.963  [
00:16:07.963  {
00:16:07.963  "name": "BaseBdev3",
00:16:07.963  "aliases": [
00:16:07.963  "92607808-a4b5-4934-ad23-01ab6478b90c"
00:16:07.963  ],
00:16:07.963  "product_name": "Malloc disk",
00:16:07.963  "block_size": 512,
00:16:07.963  "num_blocks": 65536,
00:16:07.963  "uuid": "92607808-a4b5-4934-ad23-01ab6478b90c",
00:16:07.963  "assigned_rate_limits": {
00:16:07.963  "rw_ios_per_sec": 0,
00:16:07.963  "rw_mbytes_per_sec": 0,
00:16:07.963  "r_mbytes_per_sec": 0,
00:16:07.963  "w_mbytes_per_sec": 0
00:16:07.963  },
00:16:07.963  "claimed": false,
00:16:07.963  "zoned": false,
00:16:07.963  "supported_io_types": {
00:16:07.963  "read": true,
00:16:07.963  "write": true,
00:16:07.963  "unmap": true,
00:16:07.963  "flush": true,
00:16:07.963  "reset": true,
00:16:07.963  "nvme_admin": false,
00:16:07.963  "nvme_io": false,
00:16:07.963  "nvme_io_md": false,
00:16:07.963  "write_zeroes": true,
00:16:07.963  "zcopy": true,
00:16:07.963  "get_zone_info": false,
00:16:07.963  "zone_management": false,
00:16:07.963  "zone_append": false,
00:16:07.963  "compare": false,
00:16:07.963  "compare_and_write": false,
00:16:07.963  "abort": true,
00:16:07.963  "seek_hole": false,
00:16:07.963  "seek_data": false,
00:16:07.963  "copy": true,
00:16:07.963  "nvme_iov_md": false
00:16:07.963  },
00:16:07.963  "memory_domains": [
00:16:07.963  {
00:16:07.963  "dma_device_id": "system",
00:16:07.963  "dma_device_type": 1
00:16:07.963  },
00:16:07.963  {
00:16:07.963  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:07.963  "dma_device_type": 2
00:16:07.963  }
00:16:07.963  ],
00:16:07.963  "driver_specific": {}
00:16:07.963  }
00:16:07.963  ]
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:07.963  BaseBdev4
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:07.963   11:37:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:16:07.963   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:16:07.963   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:16:07.963   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:16:07.964   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:16:07.964   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:16:07.964   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:16:07.964   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:07.964   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:07.964   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:07.964   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:16:07.964   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:07.964   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:07.964  [
00:16:07.964  {
00:16:07.964  "name": "BaseBdev4",
00:16:07.964  "aliases": [
00:16:07.964  "9addba13-3c85-41c7-b415-a43c5de33229"
00:16:07.964  ],
00:16:07.964  "product_name": "Malloc disk",
00:16:07.964  "block_size": 512,
00:16:07.964  "num_blocks": 65536,
00:16:07.964  "uuid": "9addba13-3c85-41c7-b415-a43c5de33229",
00:16:07.964  "assigned_rate_limits": {
00:16:07.964  "rw_ios_per_sec": 0,
00:16:07.964  "rw_mbytes_per_sec": 0,
00:16:07.964  "r_mbytes_per_sec": 0,
00:16:07.964  "w_mbytes_per_sec": 0
00:16:07.964  },
00:16:07.964  "claimed": false,
00:16:07.964  "zoned": false,
00:16:07.964  "supported_io_types": {
00:16:07.964  "read": true,
00:16:07.964  "write": true,
00:16:07.964  "unmap": true,
00:16:07.964  "flush": true,
00:16:08.223  "reset": true,
00:16:08.223  "nvme_admin": false,
00:16:08.223  "nvme_io": false,
00:16:08.223  "nvme_io_md": false,
00:16:08.223  "write_zeroes": true,
00:16:08.223  "zcopy": true,
00:16:08.223  "get_zone_info": false,
00:16:08.223  "zone_management": false,
00:16:08.223  "zone_append": false,
00:16:08.223  "compare": false,
00:16:08.223  "compare_and_write": false,
00:16:08.223  "abort": true,
00:16:08.223  "seek_hole": false,
00:16:08.223  "seek_data": false,
00:16:08.223  "copy": true,
00:16:08.223  "nvme_iov_md": false
00:16:08.223  },
00:16:08.223  "memory_domains": [
00:16:08.223  {
00:16:08.223  "dma_device_id": "system",
00:16:08.223  "dma_device_type": 1
00:16:08.223  },
00:16:08.223  {
00:16:08.223  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:08.223  "dma_device_type": 2
00:16:08.223  }
00:16:08.223  ],
00:16:08.223  "driver_specific": {}
00:16:08.223  }
00:16:08.223  ]
00:16:08.223   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:08.223   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:16:08.223   11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:16:08.223   11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
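
With the old array gone, the @286-@288 loop re-creates BaseBdev2 through BaseBdev4 as 32 MiB malloc disks with a 512-byte block size (65536 blocks, matching the bdev_get_bdevs output above) and waits for each to appear; BaseBdev1 is intentionally left missing. A sketch of that loop as reconstructed from the trace, where waitforbdev is the suite's polling helper built on bdev_wait_for_examine and bdev_get_bdevs -t:

  # Re-create members 2..4 only; num_base_bdevs is 4 in this test.
  for ((i = 1; i < num_base_bdevs; i++)); do
      rpc_cmd bdev_malloc_create 32 512 -b "BaseBdev$((i + 1))"
      waitforbdev "BaseBdev$((i + 1))"
  done
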
00:16:08.223   11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:16:08.223   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:08.223   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:08.223  [2024-12-16 11:37:34.042892] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:08.223  [2024-12-16 11:37:34.043031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:08.223  [2024-12-16 11:37:34.043087] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:16:08.223  [2024-12-16 11:37:34.045645] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:08.223  [2024-12-16 11:37:34.045754] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:16:08.223   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:08.223   11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:16:08.223   11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:08.223   11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:08.223   11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:08.223   11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:08.223   11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:08.223   11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:08.223   11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:08.223   11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:08.223   11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:08.223    11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:08.223    11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:08.223    11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:08.223    11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:08.223    11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:08.223   11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:08.223    "name": "Existed_Raid",
00:16:08.223    "uuid": "df2dfca1-a776-4b2a-8856-362993ff4944",
00:16:08.223    "strip_size_kb": 64,
00:16:08.223    "state": "configuring",
00:16:08.223    "raid_level": "raid5f",
00:16:08.223    "superblock": true,
00:16:08.223    "num_base_bdevs": 4,
00:16:08.223    "num_base_bdevs_discovered": 3,
00:16:08.223    "num_base_bdevs_operational": 4,
00:16:08.223    "base_bdevs_list": [
00:16:08.223      {
00:16:08.224        "name": "BaseBdev1",
00:16:08.224        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:08.224        "is_configured": false,
00:16:08.224        "data_offset": 0,
00:16:08.224        "data_size": 0
00:16:08.224      },
00:16:08.224      {
00:16:08.224        "name": "BaseBdev2",
00:16:08.224        "uuid": "7e845603-5b15-4cd1-9424-66d1540f21c8",
00:16:08.224        "is_configured": true,
00:16:08.224        "data_offset": 2048,
00:16:08.224        "data_size": 63488
00:16:08.224      },
00:16:08.224      {
00:16:08.224        "name": "BaseBdev3",
00:16:08.224        "uuid": "92607808-a4b5-4934-ad23-01ab6478b90c",
00:16:08.224        "is_configured": true,
00:16:08.224        "data_offset": 2048,
00:16:08.224        "data_size": 63488
00:16:08.224      },
00:16:08.224      {
00:16:08.224        "name": "BaseBdev4",
00:16:08.224        "uuid": "9addba13-3c85-41c7-b415-a43c5de33229",
00:16:08.224        "is_configured": true,
00:16:08.224        "data_offset": 2048,
00:16:08.224        "data_size": 63488
00:16:08.224      }
00:16:08.224    ]
00:16:08.224  }'
00:16:08.224   11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:08.224   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
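
bdev_raid_create at @290 is issued with a superblock (-s), a 64 KiB strip size (-z 64), raid level raid5f, and all four member names even though BaseBdev1 does not exist yet; the raid module claims the three existing bdevs and leaves the array in the configuring state with a placeholder slot, as the JSON above shows. A sketch of that call and a state check, with flags as they appear in the trace:

  # Create the array with one member missing; it should come up "configuring".
  rpc_cmd bdev_raid_create -z 64 -s -r raid5f \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'
  # expected: configuring
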
00:16:08.483   11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:16:08.483   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:08.483   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:08.483  [2024-12-16 11:37:34.426267] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:16:08.483   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:08.483   11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:16:08.483   11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:08.483   11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:08.483   11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:08.483   11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:08.483   11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:08.483   11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:08.483   11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:08.483   11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:08.483   11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:08.483    11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:08.483    11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:08.483    11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:08.483    11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:08.483    11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:08.483   11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:08.483    "name": "Existed_Raid",
00:16:08.483    "uuid": "df2dfca1-a776-4b2a-8856-362993ff4944",
00:16:08.483    "strip_size_kb": 64,
00:16:08.483    "state": "configuring",
00:16:08.483    "raid_level": "raid5f",
00:16:08.483    "superblock": true,
00:16:08.483    "num_base_bdevs": 4,
00:16:08.483    "num_base_bdevs_discovered": 2,
00:16:08.483    "num_base_bdevs_operational": 4,
00:16:08.483    "base_bdevs_list": [
00:16:08.483      {
00:16:08.483        "name": "BaseBdev1",
00:16:08.483        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:08.483        "is_configured": false,
00:16:08.483        "data_offset": 0,
00:16:08.483        "data_size": 0
00:16:08.484      },
00:16:08.484      {
00:16:08.484        "name": null,
00:16:08.484        "uuid": "7e845603-5b15-4cd1-9424-66d1540f21c8",
00:16:08.484        "is_configured": false,
00:16:08.484        "data_offset": 0,
00:16:08.484        "data_size": 63488
00:16:08.484      },
00:16:08.484      {
00:16:08.484        "name": "BaseBdev3",
00:16:08.484        "uuid": "92607808-a4b5-4934-ad23-01ab6478b90c",
00:16:08.484        "is_configured": true,
00:16:08.484        "data_offset": 2048,
00:16:08.484        "data_size": 63488
00:16:08.484      },
00:16:08.484      {
00:16:08.484        "name": "BaseBdev4",
00:16:08.484        "uuid": "9addba13-3c85-41c7-b415-a43c5de33229",
00:16:08.484        "is_configured": true,
00:16:08.484        "data_offset": 2048,
00:16:08.484        "data_size": 63488
00:16:08.484      }
00:16:08.484    ]
00:16:08.484  }'
00:16:08.484   11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:08.484   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:09.053    11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:09.053    11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:09.053    11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:09.053    11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:16:09.053    11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:09.053   11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
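
bdev_raid_remove_base_bdev at @293 detaches BaseBdev2 from the configuring array without deleting the malloc bdev itself, so the corresponding base_bdevs_list entry drops to name null / is_configured false, which the @295 check reads back by slot index. A sketch of that pair of calls (slot 1 is the position formerly held by BaseBdev2; slot 0 belongs to the still-missing BaseBdev1):

  # Detach a member and confirm its slot is now unconfigured.
  rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
  rpc_cmd bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[1].is_configured'
  # expected: false
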
00:16:09.053   11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:16:09.053   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:09.053   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:09.053  BaseBdev1
00:16:09.053  [2024-12-16 11:37:34.986908] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:09.053   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:09.053   11:37:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:16:09.053   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:16:09.053   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:16:09.053   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:16:09.053   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:16:09.053   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:16:09.053   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:16:09.053   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:09.053   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:09.053   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:09.053   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:16:09.053   11:37:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:09.053   11:37:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:09.053  [
00:16:09.053  {
00:16:09.053  "name": "BaseBdev1",
00:16:09.053  "aliases": [
00:16:09.053  "3548a600-02af-4448-9838-acf9a017337f"
00:16:09.053  ],
00:16:09.053  "product_name": "Malloc disk",
00:16:09.053  "block_size": 512,
00:16:09.053  "num_blocks": 65536,
00:16:09.053  "uuid": "3548a600-02af-4448-9838-acf9a017337f",
00:16:09.053  "assigned_rate_limits": {
00:16:09.053  "rw_ios_per_sec": 0,
00:16:09.053  "rw_mbytes_per_sec": 0,
00:16:09.053  "r_mbytes_per_sec": 0,
00:16:09.053  "w_mbytes_per_sec": 0
00:16:09.053  },
00:16:09.053  "claimed": true,
00:16:09.053  "claim_type": "exclusive_write",
00:16:09.053  "zoned": false,
00:16:09.053  "supported_io_types": {
00:16:09.053  "read": true,
00:16:09.053  "write": true,
00:16:09.053  "unmap": true,
00:16:09.053  "flush": true,
00:16:09.053  "reset": true,
00:16:09.053  "nvme_admin": false,
00:16:09.054  "nvme_io": false,
00:16:09.054  "nvme_io_md": false,
00:16:09.054  "write_zeroes": true,
00:16:09.054  "zcopy": true,
00:16:09.054  "get_zone_info": false,
00:16:09.054  "zone_management": false,
00:16:09.054  "zone_append": false,
00:16:09.054  "compare": false,
00:16:09.054  "compare_and_write": false,
00:16:09.054  "abort": true,
00:16:09.054  "seek_hole": false,
00:16:09.054  "seek_data": false,
00:16:09.054  "copy": true,
00:16:09.054  "nvme_iov_md": false
00:16:09.054  },
00:16:09.054  "memory_domains": [
00:16:09.054  {
00:16:09.054  "dma_device_id": "system",
00:16:09.054  "dma_device_type": 1
00:16:09.054  },
00:16:09.054  {
00:16:09.054  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:09.054  "dma_device_type": 2
00:16:09.054  }
00:16:09.054  ],
00:16:09.054  "driver_specific": {}
00:16:09.054  }
00:16:09.054  ]
00:16:09.054   11:37:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:09.054   11:37:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:16:09.054   11:37:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:16:09.054   11:37:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:09.054   11:37:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:09.054   11:37:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:09.054   11:37:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:09.054   11:37:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:09.054   11:37:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:09.054   11:37:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:09.054   11:37:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:09.054   11:37:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:09.054    11:37:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:09.054    11:37:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:09.054    11:37:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:09.054    11:37:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:09.054    11:37:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:09.054   11:37:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:09.054    "name": "Existed_Raid",
00:16:09.054    "uuid": "df2dfca1-a776-4b2a-8856-362993ff4944",
00:16:09.054    "strip_size_kb": 64,
00:16:09.054    "state": "configuring",
00:16:09.054    "raid_level": "raid5f",
00:16:09.054    "superblock": true,
00:16:09.054    "num_base_bdevs": 4,
00:16:09.054    "num_base_bdevs_discovered": 3,
00:16:09.054    "num_base_bdevs_operational": 4,
00:16:09.054    "base_bdevs_list": [
00:16:09.054      {
00:16:09.054        "name": "BaseBdev1",
00:16:09.054        "uuid": "3548a600-02af-4448-9838-acf9a017337f",
00:16:09.054        "is_configured": true,
00:16:09.054        "data_offset": 2048,
00:16:09.054        "data_size": 63488
00:16:09.054      },
00:16:09.054      {
00:16:09.054        "name": null,
00:16:09.054        "uuid": "7e845603-5b15-4cd1-9424-66d1540f21c8",
00:16:09.054        "is_configured": false,
00:16:09.054        "data_offset": 0,
00:16:09.054        "data_size": 63488
00:16:09.054      },
00:16:09.054      {
00:16:09.054        "name": "BaseBdev3",
00:16:09.054        "uuid": "92607808-a4b5-4934-ad23-01ab6478b90c",
00:16:09.054        "is_configured": true,
00:16:09.054        "data_offset": 2048,
00:16:09.054        "data_size": 63488
00:16:09.054      },
00:16:09.054      {
00:16:09.054        "name": "BaseBdev4",
00:16:09.054        "uuid": "9addba13-3c85-41c7-b415-a43c5de33229",
00:16:09.054        "is_configured": true,
00:16:09.054        "data_offset": 2048,
00:16:09.054        "data_size": 63488
00:16:09.054      }
00:16:09.054    ]
00:16:09.054  }'
00:16:09.054   11:37:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:09.054   11:37:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:09.619    11:37:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:09.619    11:37:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:09.619    11:37:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:09.619    11:37:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:16:09.619    11:37:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:09.619   11:37:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
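
Creating a malloc bdev named BaseBdev1 at @297 is enough, in this trace, for the raid module to claim it into the empty first slot, presumably via its examine callback (see the "bdev BaseBdev1 is claimed" debug line above); no explicit add call is issued, and the @300 check confirms the slot is configured. A sketch of that step:

  # Provide the missing member; the raid module claims it by name.
  rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
  waitforbdev BaseBdev1
  rpc_cmd bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[0].is_configured'
  # expected: true
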
00:16:09.619   11:37:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:16:09.619   11:37:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:09.619   11:37:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:09.619  [2024-12-16 11:37:35.594012] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:16:09.619   11:37:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:09.619   11:37:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:16:09.619   11:37:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:09.619   11:37:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:09.619   11:37:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:09.619   11:37:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:09.619   11:37:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:09.619   11:37:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:09.619   11:37:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:09.619   11:37:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:09.619   11:37:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:09.619    11:37:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:09.619    11:37:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:09.619    11:37:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:09.619    11:37:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:09.619    11:37:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:09.619   11:37:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:09.619    "name": "Existed_Raid",
00:16:09.619    "uuid": "df2dfca1-a776-4b2a-8856-362993ff4944",
00:16:09.619    "strip_size_kb": 64,
00:16:09.619    "state": "configuring",
00:16:09.619    "raid_level": "raid5f",
00:16:09.619    "superblock": true,
00:16:09.619    "num_base_bdevs": 4,
00:16:09.619    "num_base_bdevs_discovered": 2,
00:16:09.619    "num_base_bdevs_operational": 4,
00:16:09.619    "base_bdevs_list": [
00:16:09.619      {
00:16:09.619        "name": "BaseBdev1",
00:16:09.619        "uuid": "3548a600-02af-4448-9838-acf9a017337f",
00:16:09.619        "is_configured": true,
00:16:09.619        "data_offset": 2048,
00:16:09.619        "data_size": 63488
00:16:09.619      },
00:16:09.619      {
00:16:09.619        "name": null,
00:16:09.619        "uuid": "7e845603-5b15-4cd1-9424-66d1540f21c8",
00:16:09.619        "is_configured": false,
00:16:09.619        "data_offset": 0,
00:16:09.619        "data_size": 63488
00:16:09.619      },
00:16:09.619      {
00:16:09.619        "name": null,
00:16:09.619        "uuid": "92607808-a4b5-4934-ad23-01ab6478b90c",
00:16:09.619        "is_configured": false,
00:16:09.619        "data_offset": 0,
00:16:09.619        "data_size": 63488
00:16:09.619      },
00:16:09.619      {
00:16:09.619        "name": "BaseBdev4",
00:16:09.619        "uuid": "9addba13-3c85-41c7-b415-a43c5de33229",
00:16:09.619        "is_configured": true,
00:16:09.619        "data_offset": 2048,
00:16:09.619        "data_size": 63488
00:16:09.619      }
00:16:09.619    ]
00:16:09.619  }'
00:16:09.619   11:37:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:09.619   11:37:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:10.186    11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:10.186    11:37:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:10.186    11:37:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:10.186    11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:16:10.186    11:37:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:10.186   11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:16:10.186   11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:16:10.186   11:37:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:10.186   11:37:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:10.186  [2024-12-16 11:37:36.109182] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:10.186   11:37:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:10.186   11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:16:10.187   11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:10.187   11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:10.187   11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:10.187   11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:10.187   11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:10.187   11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:10.187   11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:10.187   11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:10.187   11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:10.187    11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:10.187    11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:10.187    11:37:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:10.187    11:37:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:10.187    11:37:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:10.187   11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:10.187    "name": "Existed_Raid",
00:16:10.187    "uuid": "df2dfca1-a776-4b2a-8856-362993ff4944",
00:16:10.187    "strip_size_kb": 64,
00:16:10.187    "state": "configuring",
00:16:10.187    "raid_level": "raid5f",
00:16:10.187    "superblock": true,
00:16:10.187    "num_base_bdevs": 4,
00:16:10.187    "num_base_bdevs_discovered": 3,
00:16:10.187    "num_base_bdevs_operational": 4,
00:16:10.187    "base_bdevs_list": [
00:16:10.187      {
00:16:10.187        "name": "BaseBdev1",
00:16:10.187        "uuid": "3548a600-02af-4448-9838-acf9a017337f",
00:16:10.187        "is_configured": true,
00:16:10.187        "data_offset": 2048,
00:16:10.187        "data_size": 63488
00:16:10.187      },
00:16:10.187      {
00:16:10.187        "name": null,
00:16:10.187        "uuid": "7e845603-5b15-4cd1-9424-66d1540f21c8",
00:16:10.187        "is_configured": false,
00:16:10.187        "data_offset": 0,
00:16:10.187        "data_size": 63488
00:16:10.187      },
00:16:10.187      {
00:16:10.187        "name": "BaseBdev3",
00:16:10.187        "uuid": "92607808-a4b5-4934-ad23-01ab6478b90c",
00:16:10.187        "is_configured": true,
00:16:10.187        "data_offset": 2048,
00:16:10.187        "data_size": 63488
00:16:10.187      },
00:16:10.187      {
00:16:10.187        "name": "BaseBdev4",
00:16:10.187        "uuid": "9addba13-3c85-41c7-b415-a43c5de33229",
00:16:10.187        "is_configured": true,
00:16:10.187        "data_offset": 2048,
00:16:10.187        "data_size": 63488
00:16:10.187      }
00:16:10.187    ]
00:16:10.187  }'
00:16:10.187   11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:10.187   11:37:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:10.755    11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:16:10.755    11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:10.755    11:37:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:10.755    11:37:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:10.755    11:37:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:10.755   11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:16:10.755   11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:16:10.755   11:37:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:10.755   11:37:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:10.755  [2024-12-16 11:37:36.592369] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:16:10.755   11:37:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:10.755   11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:16:10.755   11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:10.755   11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:10.755   11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:10.755   11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:10.755   11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:10.755   11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:10.755   11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:10.755   11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:10.755   11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:10.755    11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:10.755    11:37:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:10.755    11:37:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:10.755    11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:10.755    11:37:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:10.755   11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:10.755    "name": "Existed_Raid",
00:16:10.755    "uuid": "df2dfca1-a776-4b2a-8856-362993ff4944",
00:16:10.755    "strip_size_kb": 64,
00:16:10.755    "state": "configuring",
00:16:10.755    "raid_level": "raid5f",
00:16:10.755    "superblock": true,
00:16:10.755    "num_base_bdevs": 4,
00:16:10.755    "num_base_bdevs_discovered": 2,
00:16:10.755    "num_base_bdevs_operational": 4,
00:16:10.755    "base_bdevs_list": [
00:16:10.755      {
00:16:10.755        "name": null,
00:16:10.755        "uuid": "3548a600-02af-4448-9838-acf9a017337f",
00:16:10.755        "is_configured": false,
00:16:10.755        "data_offset": 0,
00:16:10.755        "data_size": 63488
00:16:10.755      },
00:16:10.755      {
00:16:10.755        "name": null,
00:16:10.755        "uuid": "7e845603-5b15-4cd1-9424-66d1540f21c8",
00:16:10.755        "is_configured": false,
00:16:10.755        "data_offset": 0,
00:16:10.755        "data_size": 63488
00:16:10.755      },
00:16:10.755      {
00:16:10.755        "name": "BaseBdev3",
00:16:10.755        "uuid": "92607808-a4b5-4934-ad23-01ab6478b90c",
00:16:10.755        "is_configured": true,
00:16:10.755        "data_offset": 2048,
00:16:10.755        "data_size": 63488
00:16:10.755      },
00:16:10.755      {
00:16:10.755        "name": "BaseBdev4",
00:16:10.755        "uuid": "9addba13-3c85-41c7-b415-a43c5de33229",
00:16:10.755        "is_configured": true,
00:16:10.755        "data_offset": 2048,
00:16:10.755        "data_size": 63488
00:16:10.755      }
00:16:10.755    ]
00:16:10.755  }'
00:16:10.755   11:37:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:10.755   11:37:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:11.015    11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:11.015    11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:11.015    11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:11.015    11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:16:11.015    11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:11.015   11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:16:11.015   11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:16:11.015   11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:11.274   11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:11.274  [2024-12-16 11:37:37.084404] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:16:11.274   11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:11.274   11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:16:11.274   11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:11.274   11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:11.274   11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:11.274   11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:11.274   11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:11.274   11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:11.274   11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:11.274   11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:11.274   11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:11.274    11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:11.274    11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:11.274    11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:11.274    11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:11.274    11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:11.274   11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:11.274    "name": "Existed_Raid",
00:16:11.274    "uuid": "df2dfca1-a776-4b2a-8856-362993ff4944",
00:16:11.274    "strip_size_kb": 64,
00:16:11.274    "state": "configuring",
00:16:11.274    "raid_level": "raid5f",
00:16:11.274    "superblock": true,
00:16:11.274    "num_base_bdevs": 4,
00:16:11.274    "num_base_bdevs_discovered": 3,
00:16:11.275    "num_base_bdevs_operational": 4,
00:16:11.275    "base_bdevs_list": [
00:16:11.275      {
00:16:11.275        "name": null,
00:16:11.275        "uuid": "3548a600-02af-4448-9838-acf9a017337f",
00:16:11.275        "is_configured": false,
00:16:11.275        "data_offset": 0,
00:16:11.275        "data_size": 63488
00:16:11.275      },
00:16:11.275      {
00:16:11.275        "name": "BaseBdev2",
00:16:11.275        "uuid": "7e845603-5b15-4cd1-9424-66d1540f21c8",
00:16:11.275        "is_configured": true,
00:16:11.275        "data_offset": 2048,
00:16:11.275        "data_size": 63488
00:16:11.275      },
00:16:11.275      {
00:16:11.275        "name": "BaseBdev3",
00:16:11.275        "uuid": "92607808-a4b5-4934-ad23-01ab6478b90c",
00:16:11.275        "is_configured": true,
00:16:11.275        "data_offset": 2048,
00:16:11.275        "data_size": 63488
00:16:11.275      },
00:16:11.275      {
00:16:11.275        "name": "BaseBdev4",
00:16:11.275        "uuid": "9addba13-3c85-41c7-b415-a43c5de33229",
00:16:11.275        "is_configured": true,
00:16:11.275        "data_offset": 2048,
00:16:11.275        "data_size": 63488
00:16:11.275      }
00:16:11.275    ]
00:16:11.275  }'
00:16:11.275   11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:11.275   11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:11.534    11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:11.534    11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:11.534    11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:11.534    11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:16:11.534    11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:11.534   11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:16:11.534    11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:11.534    11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:11.534    11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:11.534    11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:16:11.534    11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:11.534   11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3548a600-02af-4448-9838-acf9a017337f
00:16:11.534   11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:11.534   11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:11.793  [2024-12-16 11:37:37.617052] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:16:11.793  NewBaseBdev
00:16:11.793  [2024-12-16 11:37:37.617381] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:16:11.793  [2024-12-16 11:37:37.617400] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:16:11.793  [2024-12-16 11:37:37.617748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:16:11.793  [2024-12-16 11:37:37.618259] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:16:11.793  [2024-12-16 11:37:37.618294] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00
00:16:11.793  [2024-12-16 11:37:37.618415] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:11.793   11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:11.793   11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:16:11.793   11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev
00:16:11.793   11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:16:11.793   11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:16:11.793   11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:16:11.793   11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:16:11.793   11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:16:11.793   11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:11.793   11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:11.793   11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:11.793   11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:16:11.794   11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:11.794   11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:11.794  [
00:16:11.794    {
00:16:11.794      "name": "NewBaseBdev",
00:16:11.794      "aliases": [
00:16:11.794        "3548a600-02af-4448-9838-acf9a017337f"
00:16:11.794      ],
00:16:11.794      "product_name": "Malloc disk",
00:16:11.794      "block_size": 512,
00:16:11.794      "num_blocks": 65536,
00:16:11.794      "uuid": "3548a600-02af-4448-9838-acf9a017337f",
00:16:11.794      "assigned_rate_limits": {
00:16:11.794        "rw_ios_per_sec": 0,
00:16:11.794        "rw_mbytes_per_sec": 0,
00:16:11.794        "r_mbytes_per_sec": 0,
00:16:11.794        "w_mbytes_per_sec": 0
00:16:11.794      },
00:16:11.794      "claimed": true,
00:16:11.794      "claim_type": "exclusive_write",
00:16:11.794      "zoned": false,
00:16:11.794      "supported_io_types": {
00:16:11.794        "read": true,
00:16:11.794        "write": true,
00:16:11.794        "unmap": true,
00:16:11.794        "flush": true,
00:16:11.794        "reset": true,
00:16:11.794        "nvme_admin": false,
00:16:11.794        "nvme_io": false,
00:16:11.794        "nvme_io_md": false,
00:16:11.794        "write_zeroes": true,
00:16:11.794        "zcopy": true,
00:16:11.794        "get_zone_info": false,
00:16:11.794        "zone_management": false,
00:16:11.794        "zone_append": false,
00:16:11.794        "compare": false,
00:16:11.794        "compare_and_write": false,
00:16:11.794        "abort": true,
00:16:11.794        "seek_hole": false,
00:16:11.794        "seek_data": false,
00:16:11.794        "copy": true,
00:16:11.794        "nvme_iov_md": false
00:16:11.794      },
00:16:11.794      "memory_domains": [
00:16:11.794        {
00:16:11.794          "dma_device_id": "system",
00:16:11.794          "dma_device_type": 1
00:16:11.794        },
00:16:11.794        {
00:16:11.794          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:11.794          "dma_device_type": 2
00:16:11.794        }
00:16:11.794      ],
00:16:11.794      "driver_specific": {}
00:16:11.794    }
00:16:11.794  ]
00:16:11.794   11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:11.794   11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:16:11.794   11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4
00:16:11.794   11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:11.794   11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:11.794   11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:11.794   11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:11.794   11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:11.794   11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:11.794   11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:11.794   11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:11.794   11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:11.794    11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:11.794    11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:11.794    11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:11.794    11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:11.794    11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:11.794   11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:11.794    "name": "Existed_Raid",
00:16:11.794    "uuid": "df2dfca1-a776-4b2a-8856-362993ff4944",
00:16:11.794    "strip_size_kb": 64,
00:16:11.794    "state": "online",
00:16:11.794    "raid_level": "raid5f",
00:16:11.794    "superblock": true,
00:16:11.794    "num_base_bdevs": 4,
00:16:11.794    "num_base_bdevs_discovered": 4,
00:16:11.794    "num_base_bdevs_operational": 4,
00:16:11.794    "base_bdevs_list": [
00:16:11.794      {
00:16:11.794        "name": "NewBaseBdev",
00:16:11.794        "uuid": "3548a600-02af-4448-9838-acf9a017337f",
00:16:11.794        "is_configured": true,
00:16:11.794        "data_offset": 2048,
00:16:11.794        "data_size": 63488
00:16:11.794      },
00:16:11.794      {
00:16:11.794        "name": "BaseBdev2",
00:16:11.794        "uuid": "7e845603-5b15-4cd1-9424-66d1540f21c8",
00:16:11.794        "is_configured": true,
00:16:11.794        "data_offset": 2048,
00:16:11.794        "data_size": 63488
00:16:11.794      },
00:16:11.794      {
00:16:11.794        "name": "BaseBdev3",
00:16:11.794        "uuid": "92607808-a4b5-4934-ad23-01ab6478b90c",
00:16:11.794        "is_configured": true,
00:16:11.794        "data_offset": 2048,
00:16:11.794        "data_size": 63488
00:16:11.794      },
00:16:11.794      {
00:16:11.794        "name": "BaseBdev4",
00:16:11.794        "uuid": "9addba13-3c85-41c7-b415-a43c5de33229",
00:16:11.794        "is_configured": true,
00:16:11.794        "data_offset": 2048,
00:16:11.794        "data_size": 63488
00:16:11.794      }
00:16:11.794    ]
00:16:11.794  }'
00:16:11.794   11:37:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:11.794   11:37:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:12.052   11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:16:12.052   11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:16:12.052   11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:16:12.052   11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:16:12.052   11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:16:12.052   11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:16:12.052    11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:16:12.052    11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:16:12.052    11:37:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:12.052    11:37:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:12.052  [2024-12-16 11:37:38.104851] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:12.312    11:37:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:12.312   11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:16:12.312    "name": "Existed_Raid",
00:16:12.312    "aliases": [
00:16:12.312      "df2dfca1-a776-4b2a-8856-362993ff4944"
00:16:12.312    ],
00:16:12.312    "product_name": "Raid Volume",
00:16:12.312    "block_size": 512,
00:16:12.312    "num_blocks": 190464,
00:16:12.312    "uuid": "df2dfca1-a776-4b2a-8856-362993ff4944",
00:16:12.312    "assigned_rate_limits": {
00:16:12.312      "rw_ios_per_sec": 0,
00:16:12.312      "rw_mbytes_per_sec": 0,
00:16:12.312      "r_mbytes_per_sec": 0,
00:16:12.312      "w_mbytes_per_sec": 0
00:16:12.312    },
00:16:12.312    "claimed": false,
00:16:12.312    "zoned": false,
00:16:12.312    "supported_io_types": {
00:16:12.312      "read": true,
00:16:12.312      "write": true,
00:16:12.312      "unmap": false,
00:16:12.312      "flush": false,
00:16:12.312      "reset": true,
00:16:12.312      "nvme_admin": false,
00:16:12.312      "nvme_io": false,
00:16:12.312      "nvme_io_md": false,
00:16:12.312      "write_zeroes": true,
00:16:12.312      "zcopy": false,
00:16:12.312      "get_zone_info": false,
00:16:12.312      "zone_management": false,
00:16:12.312      "zone_append": false,
00:16:12.312      "compare": false,
00:16:12.312      "compare_and_write": false,
00:16:12.312      "abort": false,
00:16:12.312      "seek_hole": false,
00:16:12.312      "seek_data": false,
00:16:12.312      "copy": false,
00:16:12.312      "nvme_iov_md": false
00:16:12.312    },
00:16:12.312    "driver_specific": {
00:16:12.312      "raid": {
00:16:12.312        "uuid": "df2dfca1-a776-4b2a-8856-362993ff4944",
00:16:12.312        "strip_size_kb": 64,
00:16:12.312        "state": "online",
00:16:12.312        "raid_level": "raid5f",
00:16:12.312        "superblock": true,
00:16:12.312        "num_base_bdevs": 4,
00:16:12.312        "num_base_bdevs_discovered": 4,
00:16:12.312        "num_base_bdevs_operational": 4,
00:16:12.312        "base_bdevs_list": [
00:16:12.312          {
00:16:12.312            "name": "NewBaseBdev",
00:16:12.312            "uuid": "3548a600-02af-4448-9838-acf9a017337f",
00:16:12.312            "is_configured": true,
00:16:12.312            "data_offset": 2048,
00:16:12.312            "data_size": 63488
00:16:12.312          },
00:16:12.312          {
00:16:12.312            "name": "BaseBdev2",
00:16:12.312            "uuid": "7e845603-5b15-4cd1-9424-66d1540f21c8",
00:16:12.312            "is_configured": true,
00:16:12.312            "data_offset": 2048,
00:16:12.312            "data_size": 63488
00:16:12.312          },
00:16:12.312          {
00:16:12.312            "name": "BaseBdev3",
00:16:12.312            "uuid": "92607808-a4b5-4934-ad23-01ab6478b90c",
00:16:12.312            "is_configured": true,
00:16:12.312            "data_offset": 2048,
00:16:12.312            "data_size": 63488
00:16:12.312          },
00:16:12.312          {
00:16:12.312            "name": "BaseBdev4",
00:16:12.312            "uuid": "9addba13-3c85-41c7-b415-a43c5de33229",
00:16:12.312            "is_configured": true,
00:16:12.312            "data_offset": 2048,
00:16:12.312            "data_size": 63488
00:16:12.312          }
00:16:12.312        ]
00:16:12.312      }
00:16:12.312    }
00:16:12.312  }'
00:16:12.312    11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:16:12.312   11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:16:12.312  BaseBdev2
00:16:12.312  BaseBdev3
00:16:12.312  BaseBdev4'
00:16:12.312    11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:12.312   11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:16:12.312   11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:12.312    11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:16:12.312    11:37:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:12.312    11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:12.312    11:37:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:12.312    11:37:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:12.312   11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:16:12.312   11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:16:12.312   11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:12.312    11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:16:12.312    11:37:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:12.312    11:37:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:12.312    11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:12.312    11:37:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:12.312   11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:16:12.312   11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:16:12.312   11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:12.312    11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:16:12.312    11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:12.313    11:37:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:12.313    11:37:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:12.313    11:37:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:12.313   11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:16:12.313   11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:16:12.313   11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:12.571    11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:16:12.571    11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:12.571    11:37:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:12.571    11:37:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:12.571    11:37:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:12.571   11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:16:12.571   11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:16:12.571   11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:16:12.571   11:37:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:12.571   11:37:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:12.571  [2024-12-16 11:37:38.424025] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:12.571  [2024-12-16 11:37:38.424156] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:12.571  [2024-12-16 11:37:38.424303] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:12.571  [2024-12-16 11:37:38.424690] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:12.571  [2024-12-16 11:37:38.424752] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline
00:16:12.571   11:37:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:12.571   11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 94270
00:16:12.571   11:37:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 94270 ']'
00:16:12.571   11:37:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 94270
00:16:12.571    11:37:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname
00:16:12.571   11:37:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:16:12.571    11:37:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94270
00:16:12.571  killing process with pid 94270
00:16:12.571   11:37:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:16:12.571   11:37:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:16:12.571   11:37:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94270'
00:16:12.571   11:37:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 94270
00:16:12.571  [2024-12-16 11:37:38.467511] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:16:12.571   11:37:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 94270
00:16:12.571  [2024-12-16 11:37:38.545370] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:16:12.829   11:37:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:16:12.829  
00:16:12.829  real	0m9.885s
00:16:12.829  user	0m16.612s
00:16:12.829  sys	0m2.153s
00:16:12.829   11:37:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
00:16:12.829  ************************************
00:16:12.829  END TEST raid5f_state_function_test_sb
00:16:12.829  ************************************
00:16:12.829   11:37:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:12.829   11:37:38 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4
00:16:12.829   11:37:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:16:12.829   11:37:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:16:12.829   11:37:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:16:12.829  ************************************
00:16:12.829  START TEST raid5f_superblock_test
00:16:12.829  ************************************
00:16:12.829   11:37:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4
00:16:12.829   11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f
00:16:12.829   11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4
00:16:12.829   11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:16:12.829   11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:16:12.829   11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:16:12.829   11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:16:12.829   11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:16:12.829   11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:16:12.829   11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:16:12.829   11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:16:12.829   11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:16:12.829   11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:16:12.829   11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:16:12.829   11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']'
00:16:12.829   11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:16:12.829   11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:16:12.829   11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=94919
00:16:12.829   11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:16:12.829   11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 94919
00:16:12.829   11:37:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 94919 ']'
00:16:12.829   11:37:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:12.829   11:37:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:16:12.829   11:37:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:12.829  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:12.829   11:37:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:16:12.829   11:37:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:13.087  [2024-12-16 11:37:38.962299] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:16:13.087  [2024-12-16 11:37:38.962562] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94919 ]
00:16:13.087  [2024-12-16 11:37:39.124558] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:13.346  [2024-12-16 11:37:39.173301] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:16:13.346  [2024-12-16 11:37:39.216352] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:13.346  [2024-12-16 11:37:39.216476] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:13.914   11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:16:13.914   11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:16:13.914   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:16:13.914   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:16:13.914   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:16:13.914   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:16:13.914   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:16:13.914   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:13.914   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:16:13.914   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:13.914   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:16:13.914   11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:13.914   11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:13.914  malloc1
00:16:13.914   11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:13.914   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:16:13.914   11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:13.914   11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:13.914  [2024-12-16 11:37:39.826773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:16:13.914  [2024-12-16 11:37:39.826909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:13.914  [2024-12-16 11:37:39.826950] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:16:13.914  [2024-12-16 11:37:39.826984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:13.914  [2024-12-16 11:37:39.829243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:13.914  [2024-12-16 11:37:39.829315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:16:13.914  pt1
00:16:13.914   11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:13.914   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:16:13.914   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:16:13.914   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:13.915  malloc2
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:13.915  [2024-12-16 11:37:39.869992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:13.915  [2024-12-16 11:37:39.870089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:13.915  [2024-12-16 11:37:39.870138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:16:13.915  [2024-12-16 11:37:39.870168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:13.915  [2024-12-16 11:37:39.872326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:13.915  [2024-12-16 11:37:39.872401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:13.915  pt2
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:13.915  malloc3
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:13.915  [2024-12-16 11:37:39.898448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:16:13.915  [2024-12-16 11:37:39.898564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:13.915  [2024-12-16 11:37:39.898600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:16:13.915  [2024-12-16 11:37:39.898632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:13.915  [2024-12-16 11:37:39.900773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:13.915  [2024-12-16 11:37:39.900844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:16:13.915  pt3
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:13.915  malloc4
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:13.915  [2024-12-16 11:37:39.930874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:16:13.915  [2024-12-16 11:37:39.930965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:13.915  [2024-12-16 11:37:39.930995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:16:13.915  [2024-12-16 11:37:39.931048] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:13.915  [2024-12-16 11:37:39.933214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:13.915  [2024-12-16 11:37:39.933282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:16:13.915  pt4
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:13.915  [2024-12-16 11:37:39.942926] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:16:13.915  [2024-12-16 11:37:39.944787] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:13.915  [2024-12-16 11:37:39.944880] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:16:13.915  [2024-12-16 11:37:39.944960] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:16:13.915  [2024-12-16 11:37:39.945175] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:16:13.915  [2024-12-16 11:37:39.945238] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:16:13.915  [2024-12-16 11:37:39.945491] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:16:13.915  [2024-12-16 11:37:39.945970] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:16:13.915  [2024-12-16 11:37:39.945988] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:16:13.915  [2024-12-16 11:37:39.946114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:13.915   11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:13.915    11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:13.915    11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:13.915    11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:13.915    11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:13.915    11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.175   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:14.175    "name": "raid_bdev1",
00:16:14.175    "uuid": "4c018d97-6ff1-427a-a563-a9cdd0aa1a0b",
00:16:14.175    "strip_size_kb": 64,
00:16:14.175    "state": "online",
00:16:14.175    "raid_level": "raid5f",
00:16:14.175    "superblock": true,
00:16:14.175    "num_base_bdevs": 4,
00:16:14.175    "num_base_bdevs_discovered": 4,
00:16:14.175    "num_base_bdevs_operational": 4,
00:16:14.175    "base_bdevs_list": [
00:16:14.175      {
00:16:14.175        "name": "pt1",
00:16:14.175        "uuid": "00000000-0000-0000-0000-000000000001",
00:16:14.175        "is_configured": true,
00:16:14.175        "data_offset": 2048,
00:16:14.175        "data_size": 63488
00:16:14.175      },
00:16:14.175      {
00:16:14.175        "name": "pt2",
00:16:14.175        "uuid": "00000000-0000-0000-0000-000000000002",
00:16:14.175        "is_configured": true,
00:16:14.175        "data_offset": 2048,
00:16:14.175        "data_size": 63488
00:16:14.175      },
00:16:14.175      {
00:16:14.175        "name": "pt3",
00:16:14.175        "uuid": "00000000-0000-0000-0000-000000000003",
00:16:14.175        "is_configured": true,
00:16:14.175        "data_offset": 2048,
00:16:14.175        "data_size": 63488
00:16:14.175      },
00:16:14.175      {
00:16:14.175        "name": "pt4",
00:16:14.175        "uuid": "00000000-0000-0000-0000-000000000004",
00:16:14.175        "is_configured": true,
00:16:14.175        "data_offset": 2048,
00:16:14.175        "data_size": 63488
00:16:14.175      }
00:16:14.175    ]
00:16:14.175  }'
00:16:14.175   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:14.175   11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:14.434   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:16:14.434   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:16:14.434   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:16:14.434   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:16:14.434   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:16:14.434   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:16:14.434    11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:14.434    11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.434    11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:14.434    11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:16:14.434  [2024-12-16 11:37:40.375368] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:14.434    11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.434   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:16:14.434    "name": "raid_bdev1",
00:16:14.434    "aliases": [
00:16:14.434      "4c018d97-6ff1-427a-a563-a9cdd0aa1a0b"
00:16:14.434    ],
00:16:14.434    "product_name": "Raid Volume",
00:16:14.434    "block_size": 512,
00:16:14.434    "num_blocks": 190464,
00:16:14.434    "uuid": "4c018d97-6ff1-427a-a563-a9cdd0aa1a0b",
00:16:14.434    "assigned_rate_limits": {
00:16:14.434      "rw_ios_per_sec": 0,
00:16:14.434      "rw_mbytes_per_sec": 0,
00:16:14.434      "r_mbytes_per_sec": 0,
00:16:14.434      "w_mbytes_per_sec": 0
00:16:14.434    },
00:16:14.434    "claimed": false,
00:16:14.434    "zoned": false,
00:16:14.434    "supported_io_types": {
00:16:14.434      "read": true,
00:16:14.434      "write": true,
00:16:14.434      "unmap": false,
00:16:14.434      "flush": false,
00:16:14.434      "reset": true,
00:16:14.434      "nvme_admin": false,
00:16:14.434      "nvme_io": false,
00:16:14.434      "nvme_io_md": false,
00:16:14.434      "write_zeroes": true,
00:16:14.434      "zcopy": false,
00:16:14.434      "get_zone_info": false,
00:16:14.434      "zone_management": false,
00:16:14.434      "zone_append": false,
00:16:14.434      "compare": false,
00:16:14.434      "compare_and_write": false,
00:16:14.434      "abort": false,
00:16:14.434      "seek_hole": false,
00:16:14.434      "seek_data": false,
00:16:14.434      "copy": false,
00:16:14.434      "nvme_iov_md": false
00:16:14.434    },
00:16:14.434    "driver_specific": {
00:16:14.434      "raid": {
00:16:14.434        "uuid": "4c018d97-6ff1-427a-a563-a9cdd0aa1a0b",
00:16:14.434        "strip_size_kb": 64,
00:16:14.434        "state": "online",
00:16:14.434        "raid_level": "raid5f",
00:16:14.434        "superblock": true,
00:16:14.434        "num_base_bdevs": 4,
00:16:14.434        "num_base_bdevs_discovered": 4,
00:16:14.434        "num_base_bdevs_operational": 4,
00:16:14.434        "base_bdevs_list": [
00:16:14.434          {
00:16:14.434            "name": "pt1",
00:16:14.434            "uuid": "00000000-0000-0000-0000-000000000001",
00:16:14.434            "is_configured": true,
00:16:14.434            "data_offset": 2048,
00:16:14.434            "data_size": 63488
00:16:14.434          },
00:16:14.434          {
00:16:14.434            "name": "pt2",
00:16:14.434            "uuid": "00000000-0000-0000-0000-000000000002",
00:16:14.434            "is_configured": true,
00:16:14.434            "data_offset": 2048,
00:16:14.434            "data_size": 63488
00:16:14.434          },
00:16:14.434          {
00:16:14.434            "name": "pt3",
00:16:14.434            "uuid": "00000000-0000-0000-0000-000000000003",
00:16:14.434            "is_configured": true,
00:16:14.434            "data_offset": 2048,
00:16:14.434            "data_size": 63488
00:16:14.434          },
00:16:14.434          {
00:16:14.434            "name": "pt4",
00:16:14.434            "uuid": "00000000-0000-0000-0000-000000000004",
00:16:14.434            "is_configured": true,
00:16:14.434            "data_offset": 2048,
00:16:14.434            "data_size": 63488
00:16:14.434          }
00:16:14.434        ]
00:16:14.434      }
00:16:14.434    }
00:16:14.434  }'
00:16:14.434    11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:16:14.434   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:16:14.434  pt2
00:16:14.434  pt3
00:16:14.434  pt4'
00:16:14.434    11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:14.693   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:16:14.693   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:14.693    11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:16:14.693    11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.693    11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:14.693    11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:14.693    11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.693   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:16:14.693   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:16:14.693   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:14.693    11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:16:14.693    11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.693    11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:14.693    11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:14.693    11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.693   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:16:14.693   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:16:14.693   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:14.693    11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:16:14.693    11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:14.693    11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.693    11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:14.693    11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.693   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:16:14.693   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:16:14.693   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:14.693    11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:16:14.693    11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.693    11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:14.693    11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:14.693    11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.693   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:16:14.693   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:16:14.693    11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:16:14.693    11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:14.693    11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.693    11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:14.693  [2024-12-16 11:37:40.730777] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:14.693    11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.693   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4c018d97-6ff1-427a-a563-a9cdd0aa1a0b
00:16:14.693   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4c018d97-6ff1-427a-a563-a9cdd0aa1a0b ']'
00:16:14.693   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:16:14.693   11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.693   11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:14.989  [2024-12-16 11:37:40.758496] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:14.989  [2024-12-16 11:37:40.758581] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:14.989  [2024-12-16 11:37:40.758684] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:14.989  [2024-12-16 11:37:40.758833] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:14.989  [2024-12-16 11:37:40.758885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.989    11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:16:14.989    11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:14.989    11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.989    11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:14.989    11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.989    11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:16:14.989    11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:16:14.989    11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.989    11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:14.989    11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:14.989    11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.989   11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:14.989  [2024-12-16 11:37:40.906276] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:16:14.990  [2024-12-16 11:37:40.908278] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:16:14.990  [2024-12-16 11:37:40.908374] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:16:14.990  [2024-12-16 11:37:40.908424] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:16:14.990  [2024-12-16 11:37:40.908498] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:16:14.990  [2024-12-16 11:37:40.908601] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:16:14.990  [2024-12-16 11:37:40.908663] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:16:14.990  [2024-12-16 11:37:40.908717] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:16:14.990  [2024-12-16 11:37:40.908766] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:14.990  [2024-12-16 11:37:40.908799] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring
00:16:14.990  request:
00:16:14.990  {
00:16:14.990  "name": "raid_bdev1",
00:16:14.990  "raid_level": "raid5f",
00:16:14.990  "base_bdevs": [
00:16:14.990  "malloc1",
00:16:14.990  "malloc2",
00:16:14.990  "malloc3",
00:16:14.990  "malloc4"
00:16:14.990  ],
00:16:14.990  "strip_size_kb": 64,
00:16:14.990  "superblock": false,
00:16:14.990  "method": "bdev_raid_create",
00:16:14.990  "req_id": 1
00:16:14.990  }
00:16:14.990  Got JSON-RPC error response
00:16:14.990  response:
00:16:14.990  {
00:16:14.990  "code": -17,
00:16:14.990  "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:16:14.990  }
00:16:14.990   11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:16:14.990   11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:16:14.990   11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:16:14.990   11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:16:14.990   11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:16:14.990    11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:14.990    11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.990    11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:14.990    11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:16:14.990    11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.990   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:16:14.990   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:16:14.990   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:16:14.990   11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.990   11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:14.990  [2024-12-16 11:37:40.970109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:16:14.990  [2024-12-16 11:37:40.970213] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:14.990  [2024-12-16 11:37:40.970251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:16:14.990  [2024-12-16 11:37:40.970282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:14.990  [2024-12-16 11:37:40.972436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:14.990  [2024-12-16 11:37:40.972509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:16:14.990  [2024-12-16 11:37:40.972636] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:16:14.990  [2024-12-16 11:37:40.972734] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:16:14.990  pt1
00:16:14.990   11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.990   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:16:14.990   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:14.990   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:14.990   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:14.990   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:14.990   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:14.990   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:14.990   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:14.990   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:14.990   11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:14.990    11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:14.990    11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:14.990    11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:14.990    11:37:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:14.990    11:37:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:14.990   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:14.990    "name": "raid_bdev1",
00:16:14.990    "uuid": "4c018d97-6ff1-427a-a563-a9cdd0aa1a0b",
00:16:14.990    "strip_size_kb": 64,
00:16:14.990    "state": "configuring",
00:16:14.990    "raid_level": "raid5f",
00:16:14.990    "superblock": true,
00:16:14.990    "num_base_bdevs": 4,
00:16:14.990    "num_base_bdevs_discovered": 1,
00:16:14.990    "num_base_bdevs_operational": 4,
00:16:14.990    "base_bdevs_list": [
00:16:14.990      {
00:16:14.990        "name": "pt1",
00:16:14.990        "uuid": "00000000-0000-0000-0000-000000000001",
00:16:14.990        "is_configured": true,
00:16:14.990        "data_offset": 2048,
00:16:14.990        "data_size": 63488
00:16:14.990      },
00:16:14.990      {
00:16:14.990        "name": null,
00:16:14.990        "uuid": "00000000-0000-0000-0000-000000000002",
00:16:14.990        "is_configured": false,
00:16:14.990        "data_offset": 2048,
00:16:14.990        "data_size": 63488
00:16:14.990      },
00:16:14.990      {
00:16:14.990        "name": null,
00:16:14.990        "uuid": "00000000-0000-0000-0000-000000000003",
00:16:14.990        "is_configured": false,
00:16:14.990        "data_offset": 2048,
00:16:14.990        "data_size": 63488
00:16:14.990      },
00:16:14.990      {
00:16:14.990        "name": null,
00:16:14.990        "uuid": "00000000-0000-0000-0000-000000000004",
00:16:14.990        "is_configured": false,
00:16:14.990        "data_offset": 2048,
00:16:14.990        "data_size": 63488
00:16:14.990      }
00:16:14.990    ]
00:16:14.990  }'
00:16:14.990   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:14.990   11:37:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:15.571   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:16:15.571   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:15.571   11:37:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:15.571   11:37:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:15.571  [2024-12-16 11:37:41.377461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:15.571  [2024-12-16 11:37:41.377576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:15.571  [2024-12-16 11:37:41.377616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:16:15.571  [2024-12-16 11:37:41.377645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:15.571  [2024-12-16 11:37:41.378071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:15.571  [2024-12-16 11:37:41.378133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:15.571  [2024-12-16 11:37:41.378249] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:16:15.571  [2024-12-16 11:37:41.378301] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:15.571  pt2
00:16:15.571   11:37:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:15.571   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:16:15.571   11:37:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:15.571   11:37:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:15.571  [2024-12-16 11:37:41.389423] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:16:15.571   11:37:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:15.571   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:16:15.571   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:15.571   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:15.571   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:15.571   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:15.571   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:15.571   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:15.571   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:15.571   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:15.571   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:15.571    11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:15.571    11:37:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:15.571    11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:15.571    11:37:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:15.571    11:37:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:15.571   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:15.571    "name": "raid_bdev1",
00:16:15.571    "uuid": "4c018d97-6ff1-427a-a563-a9cdd0aa1a0b",
00:16:15.571    "strip_size_kb": 64,
00:16:15.571    "state": "configuring",
00:16:15.571    "raid_level": "raid5f",
00:16:15.571    "superblock": true,
00:16:15.571    "num_base_bdevs": 4,
00:16:15.571    "num_base_bdevs_discovered": 1,
00:16:15.571    "num_base_bdevs_operational": 4,
00:16:15.571    "base_bdevs_list": [
00:16:15.571      {
00:16:15.571        "name": "pt1",
00:16:15.571        "uuid": "00000000-0000-0000-0000-000000000001",
00:16:15.571        "is_configured": true,
00:16:15.571        "data_offset": 2048,
00:16:15.571        "data_size": 63488
00:16:15.571      },
00:16:15.571      {
00:16:15.571        "name": null,
00:16:15.571        "uuid": "00000000-0000-0000-0000-000000000002",
00:16:15.571        "is_configured": false,
00:16:15.571        "data_offset": 0,
00:16:15.571        "data_size": 63488
00:16:15.571      },
00:16:15.571      {
00:16:15.571        "name": null,
00:16:15.572        "uuid": "00000000-0000-0000-0000-000000000003",
00:16:15.572        "is_configured": false,
00:16:15.572        "data_offset": 2048,
00:16:15.572        "data_size": 63488
00:16:15.572      },
00:16:15.572      {
00:16:15.572        "name": null,
00:16:15.572        "uuid": "00000000-0000-0000-0000-000000000004",
00:16:15.572        "is_configured": false,
00:16:15.572        "data_offset": 2048,
00:16:15.572        "data_size": 63488
00:16:15.572      }
00:16:15.572    ]
00:16:15.572  }'
00:16:15.572   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:15.572   11:37:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:15.830  [2024-12-16 11:37:41.808717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:15.830  [2024-12-16 11:37:41.808853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:15.830  [2024-12-16 11:37:41.808892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:16:15.830  [2024-12-16 11:37:41.808934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:15.830  [2024-12-16 11:37:41.809414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:15.830  [2024-12-16 11:37:41.809478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:15.830  [2024-12-16 11:37:41.809601] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:16:15.830  [2024-12-16 11:37:41.809660] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:15.830  pt2
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:15.830  [2024-12-16 11:37:41.820655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:16:15.830  [2024-12-16 11:37:41.820764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:15.830  [2024-12-16 11:37:41.820804] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:16:15.830  [2024-12-16 11:37:41.820844] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:15.830  [2024-12-16 11:37:41.821234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:15.830  [2024-12-16 11:37:41.821296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:16:15.830  [2024-12-16 11:37:41.821389] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:16:15.830  [2024-12-16 11:37:41.821443] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:16:15.830  pt3
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:15.830  [2024-12-16 11:37:41.832653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:16:15.830  [2024-12-16 11:37:41.832751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:15.830  [2024-12-16 11:37:41.832787] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:16:15.830  [2024-12-16 11:37:41.832822] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:15.830  [2024-12-16 11:37:41.833167] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:15.830  [2024-12-16 11:37:41.833223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:16:15.830  [2024-12-16 11:37:41.833315] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:16:15.830  [2024-12-16 11:37:41.833370] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:16:15.830  [2024-12-16 11:37:41.833501] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:16:15.830  [2024-12-16 11:37:41.833550] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:16:15.830  [2024-12-16 11:37:41.833800] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:16:15.830  [2024-12-16 11:37:41.834363] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:16:15.830  [2024-12-16 11:37:41.834418] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:16:15.830  [2024-12-16 11:37:41.834585] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:15.830  pt4
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:15.830    11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:15.830    11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:15.830    11:37:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:15.830    11:37:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:15.830    11:37:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:15.830    "name": "raid_bdev1",
00:16:15.830    "uuid": "4c018d97-6ff1-427a-a563-a9cdd0aa1a0b",
00:16:15.830    "strip_size_kb": 64,
00:16:15.830    "state": "online",
00:16:15.830    "raid_level": "raid5f",
00:16:15.830    "superblock": true,
00:16:15.830    "num_base_bdevs": 4,
00:16:15.830    "num_base_bdevs_discovered": 4,
00:16:15.830    "num_base_bdevs_operational": 4,
00:16:15.830    "base_bdevs_list": [
00:16:15.830      {
00:16:15.830        "name": "pt1",
00:16:15.830        "uuid": "00000000-0000-0000-0000-000000000001",
00:16:15.830        "is_configured": true,
00:16:15.830        "data_offset": 2048,
00:16:15.830        "data_size": 63488
00:16:15.830      },
00:16:15.830      {
00:16:15.830        "name": "pt2",
00:16:15.830        "uuid": "00000000-0000-0000-0000-000000000002",
00:16:15.830        "is_configured": true,
00:16:15.830        "data_offset": 2048,
00:16:15.830        "data_size": 63488
00:16:15.830      },
00:16:15.830      {
00:16:15.830        "name": "pt3",
00:16:15.830        "uuid": "00000000-0000-0000-0000-000000000003",
00:16:15.830        "is_configured": true,
00:16:15.830        "data_offset": 2048,
00:16:15.830        "data_size": 63488
00:16:15.830      },
00:16:15.830      {
00:16:15.830        "name": "pt4",
00:16:15.830        "uuid": "00000000-0000-0000-0000-000000000004",
00:16:15.830        "is_configured": true,
00:16:15.830        "data_offset": 2048,
00:16:15.830        "data_size": 63488
00:16:15.830      }
00:16:15.830    ]
00:16:15.830  }'
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:15.830   11:37:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:16.396   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:16:16.396   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:16:16.396   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:16:16.397   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:16:16.397   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:16:16.397   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:16:16.397    11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:16.397    11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:16:16.397    11:37:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:16.397    11:37:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:16.397  [2024-12-16 11:37:42.244188] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:16.397    11:37:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:16.397   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:16:16.397    "name": "raid_bdev1",
00:16:16.397    "aliases": [
00:16:16.397      "4c018d97-6ff1-427a-a563-a9cdd0aa1a0b"
00:16:16.397    ],
00:16:16.397    "product_name": "Raid Volume",
00:16:16.397    "block_size": 512,
00:16:16.397    "num_blocks": 190464,
00:16:16.397    "uuid": "4c018d97-6ff1-427a-a563-a9cdd0aa1a0b",
00:16:16.397    "assigned_rate_limits": {
00:16:16.397      "rw_ios_per_sec": 0,
00:16:16.397      "rw_mbytes_per_sec": 0,
00:16:16.397      "r_mbytes_per_sec": 0,
00:16:16.397      "w_mbytes_per_sec": 0
00:16:16.397    },
00:16:16.397    "claimed": false,
00:16:16.397    "zoned": false,
00:16:16.397    "supported_io_types": {
00:16:16.397      "read": true,
00:16:16.397      "write": true,
00:16:16.397      "unmap": false,
00:16:16.397      "flush": false,
00:16:16.397      "reset": true,
00:16:16.397      "nvme_admin": false,
00:16:16.397      "nvme_io": false,
00:16:16.397      "nvme_io_md": false,
00:16:16.397      "write_zeroes": true,
00:16:16.397      "zcopy": false,
00:16:16.397      "get_zone_info": false,
00:16:16.397      "zone_management": false,
00:16:16.397      "zone_append": false,
00:16:16.397      "compare": false,
00:16:16.397      "compare_and_write": false,
00:16:16.397      "abort": false,
00:16:16.397      "seek_hole": false,
00:16:16.397      "seek_data": false,
00:16:16.397      "copy": false,
00:16:16.397      "nvme_iov_md": false
00:16:16.397    },
00:16:16.397    "driver_specific": {
00:16:16.397      "raid": {
00:16:16.397        "uuid": "4c018d97-6ff1-427a-a563-a9cdd0aa1a0b",
00:16:16.397        "strip_size_kb": 64,
00:16:16.397        "state": "online",
00:16:16.397        "raid_level": "raid5f",
00:16:16.397        "superblock": true,
00:16:16.397        "num_base_bdevs": 4,
00:16:16.397        "num_base_bdevs_discovered": 4,
00:16:16.397        "num_base_bdevs_operational": 4,
00:16:16.397        "base_bdevs_list": [
00:16:16.397          {
00:16:16.397            "name": "pt1",
00:16:16.397            "uuid": "00000000-0000-0000-0000-000000000001",
00:16:16.397            "is_configured": true,
00:16:16.397            "data_offset": 2048,
00:16:16.397            "data_size": 63488
00:16:16.397          },
00:16:16.397          {
00:16:16.397            "name": "pt2",
00:16:16.397            "uuid": "00000000-0000-0000-0000-000000000002",
00:16:16.397            "is_configured": true,
00:16:16.397            "data_offset": 2048,
00:16:16.397            "data_size": 63488
00:16:16.397          },
00:16:16.397          {
00:16:16.397            "name": "pt3",
00:16:16.397            "uuid": "00000000-0000-0000-0000-000000000003",
00:16:16.397            "is_configured": true,
00:16:16.397            "data_offset": 2048,
00:16:16.397            "data_size": 63488
00:16:16.397          },
00:16:16.397          {
00:16:16.397            "name": "pt4",
00:16:16.397            "uuid": "00000000-0000-0000-0000-000000000004",
00:16:16.397            "is_configured": true,
00:16:16.397            "data_offset": 2048,
00:16:16.397            "data_size": 63488
00:16:16.397          }
00:16:16.397        ]
00:16:16.397      }
00:16:16.397    }
00:16:16.397  }'
00:16:16.397    11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:16:16.397   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:16:16.397  pt2
00:16:16.397  pt3
00:16:16.397  pt4'
00:16:16.397    11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:16.397   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512   '
00:16:16.397   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:16.397    11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:16.397    11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:16:16.397    11:37:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:16.397    11:37:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:16.397    11:37:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:16.397   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:16:16.397   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:16:16.397   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:16.397    11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:16:16.397    11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:16.397    11:37:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:16.397    11:37:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:16.397    11:37:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:16.397   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:16:16.397   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:16:16.397   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:16.397    11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:16:16.397    11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:16.397    11:37:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:16.397    11:37:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:16.656    11:37:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:16.656   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:16:16.656   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:16:16.656   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:16.656    11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:16:16.656    11:37:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:16.656    11:37:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:16.656    11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:16.656    11:37:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:16.656   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512   '
00:16:16.656   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512    == \5\1\2\ \ \  ]]
00:16:16.656    11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:16.656    11:37:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:16.656    11:37:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:16.656    11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:16:16.656  [2024-12-16 11:37:42.551683] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:16.656    11:37:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:16.656   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4c018d97-6ff1-427a-a563-a9cdd0aa1a0b '!=' 4c018d97-6ff1-427a-a563-a9cdd0aa1a0b ']'
00:16:16.656   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f
00:16:16.656   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:16:16.656   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:16:16.656   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:16:16.656   11:37:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:16.656   11:37:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:16.656  [2024-12-16 11:37:42.599450] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:16:16.656   11:37:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:16.656   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:16:16.656   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:16.656   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:16.656   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:16.656   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:16.656   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:16.656   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:16.656   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:16.656   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:16.656   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:16.656    11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:16.656    11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:16.656    11:37:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:16.656    11:37:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:16.656    11:37:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:16.656   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:16.656    "name": "raid_bdev1",
00:16:16.656    "uuid": "4c018d97-6ff1-427a-a563-a9cdd0aa1a0b",
00:16:16.656    "strip_size_kb": 64,
00:16:16.656    "state": "online",
00:16:16.656    "raid_level": "raid5f",
00:16:16.656    "superblock": true,
00:16:16.656    "num_base_bdevs": 4,
00:16:16.656    "num_base_bdevs_discovered": 3,
00:16:16.656    "num_base_bdevs_operational": 3,
00:16:16.656    "base_bdevs_list": [
00:16:16.656      {
00:16:16.656        "name": null,
00:16:16.656        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:16.656        "is_configured": false,
00:16:16.656        "data_offset": 0,
00:16:16.656        "data_size": 63488
00:16:16.656      },
00:16:16.656      {
00:16:16.656        "name": "pt2",
00:16:16.656        "uuid": "00000000-0000-0000-0000-000000000002",
00:16:16.656        "is_configured": true,
00:16:16.656        "data_offset": 2048,
00:16:16.656        "data_size": 63488
00:16:16.656      },
00:16:16.656      {
00:16:16.656        "name": "pt3",
00:16:16.656        "uuid": "00000000-0000-0000-0000-000000000003",
00:16:16.656        "is_configured": true,
00:16:16.656        "data_offset": 2048,
00:16:16.656        "data_size": 63488
00:16:16.656      },
00:16:16.656      {
00:16:16.656        "name": "pt4",
00:16:16.656        "uuid": "00000000-0000-0000-0000-000000000004",
00:16:16.656        "is_configured": true,
00:16:16.656        "data_offset": 2048,
00:16:16.656        "data_size": 63488
00:16:16.656      }
00:16:16.656    ]
00:16:16.656  }'
00:16:16.656   11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:16.656   11:37:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.223   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:16:17.223   11:37:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:17.223   11:37:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.223  [2024-12-16 11:37:43.062639] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:17.223  [2024-12-16 11:37:43.062732] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:17.223  [2024-12-16 11:37:43.062847] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:17.223  [2024-12-16 11:37:43.062961] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:17.223  [2024-12-16 11:37:43.063026] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:16:17.223   11:37:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:17.223    11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:17.223    11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:16:17.223    11:37:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:17.224    11:37:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.224    11:37:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.224  [2024-12-16 11:37:43.162398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:17.224  [2024-12-16 11:37:43.162500] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:17.224  [2024-12-16 11:37:43.162544] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:16:17.224  [2024-12-16 11:37:43.162577] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:17.224  [2024-12-16 11:37:43.164752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:17.224  [2024-12-16 11:37:43.164824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:17.224  [2024-12-16 11:37:43.164931] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:16:17.224  [2024-12-16 11:37:43.164985] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:17.224  pt2
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:17.224    11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:17.224    11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:17.224    11:37:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:17.224    11:37:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.224    11:37:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:17.224    "name": "raid_bdev1",
00:16:17.224    "uuid": "4c018d97-6ff1-427a-a563-a9cdd0aa1a0b",
00:16:17.224    "strip_size_kb": 64,
00:16:17.224    "state": "configuring",
00:16:17.224    "raid_level": "raid5f",
00:16:17.224    "superblock": true,
00:16:17.224    "num_base_bdevs": 4,
00:16:17.224    "num_base_bdevs_discovered": 1,
00:16:17.224    "num_base_bdevs_operational": 3,
00:16:17.224    "base_bdevs_list": [
00:16:17.224      {
00:16:17.224        "name": null,
00:16:17.224        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:17.224        "is_configured": false,
00:16:17.224        "data_offset": 2048,
00:16:17.224        "data_size": 63488
00:16:17.224      },
00:16:17.224      {
00:16:17.224        "name": "pt2",
00:16:17.224        "uuid": "00000000-0000-0000-0000-000000000002",
00:16:17.224        "is_configured": true,
00:16:17.224        "data_offset": 2048,
00:16:17.224        "data_size": 63488
00:16:17.224      },
00:16:17.224      {
00:16:17.224        "name": null,
00:16:17.224        "uuid": "00000000-0000-0000-0000-000000000003",
00:16:17.224        "is_configured": false,
00:16:17.224        "data_offset": 2048,
00:16:17.224        "data_size": 63488
00:16:17.224      },
00:16:17.224      {
00:16:17.224        "name": null,
00:16:17.224        "uuid": "00000000-0000-0000-0000-000000000004",
00:16:17.224        "is_configured": false,
00:16:17.224        "data_offset": 2048,
00:16:17.224        "data_size": 63488
00:16:17.224      }
00:16:17.224    ]
00:16:17.224  }'
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:17.224   11:37:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.794   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:16:17.794   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:16:17.794   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:16:17.794   11:37:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:17.794   11:37:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.794  [2024-12-16 11:37:43.629656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:16:17.794  [2024-12-16 11:37:43.629781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:17.794  [2024-12-16 11:37:43.629807] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
00:16:17.794  [2024-12-16 11:37:43.629820] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:17.794  [2024-12-16 11:37:43.630234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:17.794  [2024-12-16 11:37:43.630270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:16:17.794  [2024-12-16 11:37:43.630351] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:16:17.794  [2024-12-16 11:37:43.630384] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:16:17.794  pt3
00:16:17.794   11:37:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:17.794   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:16:17.794   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:17.794   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:17.794   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:17.794   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:17.794   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:17.794   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:17.794   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:17.794   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:17.794   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:17.794    11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:17.795    11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:17.795    11:37:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:17.795    11:37:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.795    11:37:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:17.795   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:17.795    "name": "raid_bdev1",
00:16:17.795    "uuid": "4c018d97-6ff1-427a-a563-a9cdd0aa1a0b",
00:16:17.795    "strip_size_kb": 64,
00:16:17.795    "state": "configuring",
00:16:17.795    "raid_level": "raid5f",
00:16:17.795    "superblock": true,
00:16:17.795    "num_base_bdevs": 4,
00:16:17.795    "num_base_bdevs_discovered": 2,
00:16:17.795    "num_base_bdevs_operational": 3,
00:16:17.795    "base_bdevs_list": [
00:16:17.795      {
00:16:17.795        "name": null,
00:16:17.795        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:17.795        "is_configured": false,
00:16:17.795        "data_offset": 2048,
00:16:17.795        "data_size": 63488
00:16:17.795      },
00:16:17.795      {
00:16:17.795        "name": "pt2",
00:16:17.795        "uuid": "00000000-0000-0000-0000-000000000002",
00:16:17.795        "is_configured": true,
00:16:17.795        "data_offset": 2048,
00:16:17.795        "data_size": 63488
00:16:17.795      },
00:16:17.795      {
00:16:17.795        "name": "pt3",
00:16:17.795        "uuid": "00000000-0000-0000-0000-000000000003",
00:16:17.795        "is_configured": true,
00:16:17.795        "data_offset": 2048,
00:16:17.795        "data_size": 63488
00:16:17.795      },
00:16:17.795      {
00:16:17.795        "name": null,
00:16:17.795        "uuid": "00000000-0000-0000-0000-000000000004",
00:16:17.795        "is_configured": false,
00:16:17.795        "data_offset": 2048,
00:16:17.795        "data_size": 63488
00:16:17.795      }
00:16:17.795    ]
00:16:17.795  }'
00:16:17.795   11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:17.795   11:37:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.055   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:16:18.055   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:16:18.055   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3
00:16:18.055   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:16:18.055   11:37:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.055   11:37:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.055  [2024-12-16 11:37:44.084846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:16:18.055  [2024-12-16 11:37:44.084980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:18.055  [2024-12-16 11:37:44.085024] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
00:16:18.055  [2024-12-16 11:37:44.085059] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:18.055  [2024-12-16 11:37:44.085524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:18.055  [2024-12-16 11:37:44.085597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:16:18.055  [2024-12-16 11:37:44.085709] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:16:18.055  [2024-12-16 11:37:44.085763] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:16:18.055  [2024-12-16 11:37:44.085893] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:16:18.055  [2024-12-16 11:37:44.085932] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:16:18.055  [2024-12-16 11:37:44.086199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:16:18.055  [2024-12-16 11:37:44.086830] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:16:18.055  [2024-12-16 11:37:44.086898] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00
00:16:18.055  [2024-12-16 11:37:44.087197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:18.055  pt4
00:16:18.055   11:37:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.055   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:16:18.055   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:18.055   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:18.055   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:18.055   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:18.055   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:18.055   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:18.055   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:18.055   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:18.055   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:18.055    11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:18.055    11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:18.055    11:37:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.055    11:37:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.055    11:37:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.315   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:18.315    "name": "raid_bdev1",
00:16:18.315    "uuid": "4c018d97-6ff1-427a-a563-a9cdd0aa1a0b",
00:16:18.315    "strip_size_kb": 64,
00:16:18.315    "state": "online",
00:16:18.315    "raid_level": "raid5f",
00:16:18.315    "superblock": true,
00:16:18.315    "num_base_bdevs": 4,
00:16:18.315    "num_base_bdevs_discovered": 3,
00:16:18.315    "num_base_bdevs_operational": 3,
00:16:18.315    "base_bdevs_list": [
00:16:18.315      {
00:16:18.315        "name": null,
00:16:18.315        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:18.315        "is_configured": false,
00:16:18.315        "data_offset": 2048,
00:16:18.315        "data_size": 63488
00:16:18.315      },
00:16:18.315      {
00:16:18.315        "name": "pt2",
00:16:18.315        "uuid": "00000000-0000-0000-0000-000000000002",
00:16:18.315        "is_configured": true,
00:16:18.315        "data_offset": 2048,
00:16:18.315        "data_size": 63488
00:16:18.315      },
00:16:18.315      {
00:16:18.315        "name": "pt3",
00:16:18.315        "uuid": "00000000-0000-0000-0000-000000000003",
00:16:18.315        "is_configured": true,
00:16:18.315        "data_offset": 2048,
00:16:18.315        "data_size": 63488
00:16:18.315      },
00:16:18.315      {
00:16:18.315        "name": "pt4",
00:16:18.315        "uuid": "00000000-0000-0000-0000-000000000004",
00:16:18.315        "is_configured": true,
00:16:18.315        "data_offset": 2048,
00:16:18.315        "data_size": 63488
00:16:18.315      }
00:16:18.315    ]
00:16:18.315  }'
00:16:18.315   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:18.315   11:37:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.575   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:16:18.575   11:37:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.575   11:37:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.575  [2024-12-16 11:37:44.508213] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:18.575  [2024-12-16 11:37:44.508303] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:18.575  [2024-12-16 11:37:44.508415] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:18.575  [2024-12-16 11:37:44.508559] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:18.575  [2024-12-16 11:37:44.508613] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline
00:16:18.575   11:37:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.575    11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:16:18.575    11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:18.575    11:37:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.575    11:37:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.575    11:37:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.575   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:16:18.575   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:16:18.575   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']'
00:16:18.575   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3
00:16:18.575   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4
00:16:18.576   11:37:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.576   11:37:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.576   11:37:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.576   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:16:18.576   11:37:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.576   11:37:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.576  [2024-12-16 11:37:44.564097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:16:18.576  [2024-12-16 11:37:44.564206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:18.576  [2024-12-16 11:37:44.564249] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080
00:16:18.576  [2024-12-16 11:37:44.564286] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:18.576  [2024-12-16 11:37:44.566654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:18.576  [2024-12-16 11:37:44.566729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:16:18.576  [2024-12-16 11:37:44.566833] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:16:18.576  [2024-12-16 11:37:44.566903] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:16:18.576  [2024-12-16 11:37:44.567055] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:16:18.576  [2024-12-16 11:37:44.567119] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:18.576  [2024-12-16 11:37:44.567164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring
00:16:18.576  [2024-12-16 11:37:44.567237] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:18.576  [2024-12-16 11:37:44.567422] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:16:18.576  pt1
00:16:18.576   11:37:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.576   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']'
00:16:18.576   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:16:18.576   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:18.576   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:18.576   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:18.576   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:18.576   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:18.576   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:18.576   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:18.576   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:18.576   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:18.576    11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:18.576    11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:18.576    11:37:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.576    11:37:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.576    11:37:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.576   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:18.576    "name": "raid_bdev1",
00:16:18.576    "uuid": "4c018d97-6ff1-427a-a563-a9cdd0aa1a0b",
00:16:18.576    "strip_size_kb": 64,
00:16:18.576    "state": "configuring",
00:16:18.576    "raid_level": "raid5f",
00:16:18.576    "superblock": true,
00:16:18.576    "num_base_bdevs": 4,
00:16:18.576    "num_base_bdevs_discovered": 2,
00:16:18.576    "num_base_bdevs_operational": 3,
00:16:18.576    "base_bdevs_list": [
00:16:18.576      {
00:16:18.576        "name": null,
00:16:18.576        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:18.576        "is_configured": false,
00:16:18.576        "data_offset": 2048,
00:16:18.576        "data_size": 63488
00:16:18.576      },
00:16:18.576      {
00:16:18.576        "name": "pt2",
00:16:18.576        "uuid": "00000000-0000-0000-0000-000000000002",
00:16:18.576        "is_configured": true,
00:16:18.576        "data_offset": 2048,
00:16:18.576        "data_size": 63488
00:16:18.576      },
00:16:18.576      {
00:16:18.576        "name": "pt3",
00:16:18.576        "uuid": "00000000-0000-0000-0000-000000000003",
00:16:18.576        "is_configured": true,
00:16:18.576        "data_offset": 2048,
00:16:18.576        "data_size": 63488
00:16:18.576      },
00:16:18.576      {
00:16:18.576        "name": null,
00:16:18.576        "uuid": "00000000-0000-0000-0000-000000000004",
00:16:18.576        "is_configured": false,
00:16:18.576        "data_offset": 2048,
00:16:18.576        "data_size": 63488
00:16:18.576      }
00:16:18.576    ]
00:16:18.576  }'
00:16:18.576   11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:18.576   11:37:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.146    11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring
00:16:19.146    11:37:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:19.146    11:37:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.146    11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:16:19.146    11:37:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:19.146   11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]]
00:16:19.146   11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:16:19.146   11:37:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:19.146   11:37:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.146  [2024-12-16 11:37:45.071241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:16:19.146  [2024-12-16 11:37:45.071373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:19.146  [2024-12-16 11:37:45.071440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680
00:16:19.146  [2024-12-16 11:37:45.071482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:19.146  [2024-12-16 11:37:45.071963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:19.146  [2024-12-16 11:37:45.072035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:16:19.146  [2024-12-16 11:37:45.072147] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:16:19.146  [2024-12-16 11:37:45.072205] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:16:19.146  [2024-12-16 11:37:45.072336] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400
00:16:19.146  [2024-12-16 11:37:45.072384] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:16:19.146  [2024-12-16 11:37:45.072674] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:16:19.146  [2024-12-16 11:37:45.073304] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400
00:16:19.146  [2024-12-16 11:37:45.073360] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400
00:16:19.146  [2024-12-16 11:37:45.073617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:19.146  pt4
00:16:19.146   11:37:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:19.146   11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:16:19.146   11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:19.146   11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:19.146   11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:19.146   11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:19.146   11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:19.146   11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:19.146   11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:19.146   11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:19.146   11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:19.146    11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:19.146    11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:19.146    11:37:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:19.146    11:37:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.146    11:37:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:19.146   11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:19.146    "name": "raid_bdev1",
00:16:19.146    "uuid": "4c018d97-6ff1-427a-a563-a9cdd0aa1a0b",
00:16:19.146    "strip_size_kb": 64,
00:16:19.146    "state": "online",
00:16:19.146    "raid_level": "raid5f",
00:16:19.146    "superblock": true,
00:16:19.146    "num_base_bdevs": 4,
00:16:19.146    "num_base_bdevs_discovered": 3,
00:16:19.146    "num_base_bdevs_operational": 3,
00:16:19.146    "base_bdevs_list": [
00:16:19.146      {
00:16:19.146        "name": null,
00:16:19.146        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:19.146        "is_configured": false,
00:16:19.146        "data_offset": 2048,
00:16:19.146        "data_size": 63488
00:16:19.146      },
00:16:19.146      {
00:16:19.146        "name": "pt2",
00:16:19.146        "uuid": "00000000-0000-0000-0000-000000000002",
00:16:19.146        "is_configured": true,
00:16:19.146        "data_offset": 2048,
00:16:19.146        "data_size": 63488
00:16:19.146      },
00:16:19.146      {
00:16:19.146        "name": "pt3",
00:16:19.146        "uuid": "00000000-0000-0000-0000-000000000003",
00:16:19.146        "is_configured": true,
00:16:19.146        "data_offset": 2048,
00:16:19.146        "data_size": 63488
00:16:19.146      },
00:16:19.146      {
00:16:19.146        "name": "pt4",
00:16:19.146        "uuid": "00000000-0000-0000-0000-000000000004",
00:16:19.146        "is_configured": true,
00:16:19.146        "data_offset": 2048,
00:16:19.146        "data_size": 63488
00:16:19.146      }
00:16:19.146    ]
00:16:19.146  }'
00:16:19.146   11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:19.146   11:37:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.714    11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online
00:16:19.714    11:37:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:19.714    11:37:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.714    11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:16:19.714    11:37:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:19.714   11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:16:19.714    11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:19.714    11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
00:16:19.714    11:37:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:19.714    11:37:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.714  [2024-12-16 11:37:45.598619] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:19.714    11:37:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:19.714   11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 4c018d97-6ff1-427a-a563-a9cdd0aa1a0b '!=' 4c018d97-6ff1-427a-a563-a9cdd0aa1a0b ']'
00:16:19.714   11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 94919
00:16:19.714   11:37:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 94919 ']'
00:16:19.714   11:37:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 94919
00:16:19.714    11:37:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname
00:16:19.714   11:37:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:16:19.714    11:37:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94919
00:16:19.714   11:37:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:16:19.714   11:37:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:16:19.714   11:37:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94919'
00:16:19.714  killing process with pid 94919
00:16:19.714   11:37:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 94919
00:16:19.714  [2024-12-16 11:37:45.684314] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:16:19.714  [2024-12-16 11:37:45.684406] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:19.714   11:37:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 94919
00:16:19.714  [2024-12-16 11:37:45.684508] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:19.714  [2024-12-16 11:37:45.684520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline
00:16:19.714  [2024-12-16 11:37:45.728632] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:16:19.972   11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:16:19.972  
00:16:19.972  real	0m7.102s
00:16:19.972  user	0m11.915s
00:16:19.972  sys	0m1.545s
00:16:19.972  ************************************
00:16:19.972  END TEST raid5f_superblock_test
00:16:19.972  ************************************
00:16:19.972   11:37:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:16:19.972   11:37:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.972   11:37:46 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']'
00:16:19.972   11:37:46 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true
00:16:19.972   11:37:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:16:19.972   11:37:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:16:19.972   11:37:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:16:20.232  ************************************
00:16:20.232  START TEST raid5f_rebuild_test
00:16:20.232  ************************************
00:16:20.232   11:37:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true
00:16:20.232   11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f
00:16:20.232   11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4
00:16:20.232   11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false
00:16:20.232   11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:16:20.232   11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true
00:16:20.232    11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:16:20.232    11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:16:20.232    11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:16:20.232    11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:16:20.232    11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:16:20.232    11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:16:20.232    11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:16:20.232    11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:16:20.232    11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
00:16:20.232    11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:16:20.232    11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:16:20.232    11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4
00:16:20.232    11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:16:20.232    11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:16:20.232   11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:16:20.232   11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:16:20.232   11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:16:20.232   11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size
00:16:20.232   11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg
00:16:20.232   11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:16:20.232   11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset
00:16:20.232   11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']'
00:16:20.232   11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']'
00:16:20.232   11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64
00:16:20.232   11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64'
00:16:20.232   11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']'
00:16:20.232   11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=95393
00:16:20.232   11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:16:20.232   11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 95393
00:16:20.232   11:37:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 95393 ']'
00:16:20.232   11:37:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:20.232   11:37:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:16:20.232   11:37:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:20.232  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:20.232   11:37:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:16:20.232   11:37:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:20.232  [2024-12-16 11:37:46.140447] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:16:20.232  [2024-12-16 11:37:46.140687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95393 ]
00:16:20.232  I/O size of 3145728 is greater than zero copy threshold (65536).
00:16:20.232  Zero copy mechanism will not be used.
00:16:20.492  [2024-12-16 11:37:46.300788] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:20.492  [2024-12-16 11:37:46.347391] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:16:20.492  [2024-12-16 11:37:46.390568] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:20.492  [2024-12-16 11:37:46.390683] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:21.060   11:37:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:16:21.060   11:37:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0
00:16:21.060   11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:16:21.060   11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:16:21.060   11:37:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:21.060   11:37:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:21.060  BaseBdev1_malloc
00:16:21.060   11:37:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:21.060   11:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:16:21.060   11:37:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:21.060   11:37:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:21.060  [2024-12-16 11:37:47.005000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:16:21.060  [2024-12-16 11:37:47.005113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:21.060  [2024-12-16 11:37:47.005183] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:16:21.060  [2024-12-16 11:37:47.005226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:21.060  [2024-12-16 11:37:47.007566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:21.060  [2024-12-16 11:37:47.007641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:16:21.060  BaseBdev1
00:16:21.060   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:21.060   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:16:21.060   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:16:21.060   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:21.060   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:21.060  BaseBdev2_malloc
00:16:21.060   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:21.060   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:16:21.060   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:21.060   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:21.060  [2024-12-16 11:37:47.041869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:16:21.060  [2024-12-16 11:37:47.041974] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:21.060  [2024-12-16 11:37:47.042002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:16:21.060  [2024-12-16 11:37:47.042012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:21.060  [2024-12-16 11:37:47.044267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:21.060  [2024-12-16 11:37:47.044307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:16:21.060  BaseBdev2
00:16:21.060   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:21.060   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:16:21.060   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:16:21.060   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:21.060   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:21.060  BaseBdev3_malloc
00:16:21.060   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:21.060   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:16:21.060   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:21.060   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:21.060  [2024-12-16 11:37:47.070262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:16:21.060  [2024-12-16 11:37:47.070350] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:21.061  [2024-12-16 11:37:47.070392] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:16:21.061  [2024-12-16 11:37:47.070421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:21.061  [2024-12-16 11:37:47.072579] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:21.061  [2024-12-16 11:37:47.072646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:16:21.061  BaseBdev3
00:16:21.061   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:21.061   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:16:21.061   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:16:21.061   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:21.061   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:21.061  BaseBdev4_malloc
00:16:21.061   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:21.061   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:16:21.061   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:21.061   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:21.061  [2024-12-16 11:37:47.098625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:16:21.061  [2024-12-16 11:37:47.098739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:21.061  [2024-12-16 11:37:47.098783] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:16:21.061  [2024-12-16 11:37:47.098822] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:21.061  [2024-12-16 11:37:47.100977] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:21.061  [2024-12-16 11:37:47.101054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:16:21.061  BaseBdev4
00:16:21.061   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:21.061   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:16:21.061   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:21.061   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:21.061  spare_malloc
00:16:21.061   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:21.061   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:16:21.061   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:21.061   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:21.320  spare_delay
00:16:21.320   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:21.320   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:16:21.320   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:21.320   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:21.320  [2024-12-16 11:37:47.139242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:16:21.320  [2024-12-16 11:37:47.139345] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:21.320  [2024-12-16 11:37:47.139405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:16:21.320  [2024-12-16 11:37:47.139416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:21.320  [2024-12-16 11:37:47.141619] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:21.320  [2024-12-16 11:37:47.141646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:16:21.320  spare
00:16:21.320   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:21.320   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1
00:16:21.320   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:21.320   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:21.320  [2024-12-16 11:37:47.151317] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:21.320  [2024-12-16 11:37:47.153306] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:16:21.320  [2024-12-16 11:37:47.153429] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:21.320  [2024-12-16 11:37:47.153492] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:16:21.320  [2024-12-16 11:37:47.153613] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:16:21.320  [2024-12-16 11:37:47.153653] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:16:21.320  [2024-12-16 11:37:47.153980] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:16:21.320  [2024-12-16 11:37:47.154518] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:16:21.320  [2024-12-16 11:37:47.154589] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:16:21.320  [2024-12-16 11:37:47.154777] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:21.320   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:21.320   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:16:21.320   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:21.320   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:21.320   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:21.320   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:21.320   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:21.320   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:21.320   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:21.320   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:21.320   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:21.320    11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:21.320    11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:21.320    11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:21.320    11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:21.320    11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:21.320   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:21.320    "name": "raid_bdev1",
00:16:21.320    "uuid": "b61f53b9-dd36-431a-9f05-99bb13f17cd0",
00:16:21.320    "strip_size_kb": 64,
00:16:21.320    "state": "online",
00:16:21.320    "raid_level": "raid5f",
00:16:21.320    "superblock": false,
00:16:21.320    "num_base_bdevs": 4,
00:16:21.320    "num_base_bdevs_discovered": 4,
00:16:21.320    "num_base_bdevs_operational": 4,
00:16:21.320    "base_bdevs_list": [
00:16:21.320      {
00:16:21.320        "name": "BaseBdev1",
00:16:21.320        "uuid": "d9cffcca-0a8d-57fb-9722-c9799739b70a",
00:16:21.320        "is_configured": true,
00:16:21.320        "data_offset": 0,
00:16:21.320        "data_size": 65536
00:16:21.320      },
00:16:21.320      {
00:16:21.320        "name": "BaseBdev2",
00:16:21.320        "uuid": "a09ac6f9-fd59-5e87-89f0-95a821c5d57c",
00:16:21.320        "is_configured": true,
00:16:21.320        "data_offset": 0,
00:16:21.320        "data_size": 65536
00:16:21.320      },
00:16:21.320      {
00:16:21.320        "name": "BaseBdev3",
00:16:21.320        "uuid": "84aba746-a1ca-5d14-8ba9-77e9ec0ed9a5",
00:16:21.320        "is_configured": true,
00:16:21.320        "data_offset": 0,
00:16:21.320        "data_size": 65536
00:16:21.320      },
00:16:21.320      {
00:16:21.321        "name": "BaseBdev4",
00:16:21.321        "uuid": "1a411fd9-5e50-5c57-9d89-ffa149d9ff0e",
00:16:21.321        "is_configured": true,
00:16:21.321        "data_offset": 0,
00:16:21.321        "data_size": 65536
00:16:21.321      }
00:16:21.321    ]
00:16:21.321  }'
00:16:21.321   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:21.321   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:21.579    11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:21.579    11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:16:21.579    11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:21.579    11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:21.579  [2024-12-16 11:37:47.616036] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:21.579    11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:21.837   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608
00:16:21.837    11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:16:21.837    11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:21.837    11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:21.837    11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:21.837    11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:21.837   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0
00:16:21.837   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:16:21.837   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:16:21.837   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:16:21.837   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:16:21.837   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:16:21.837   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:16:21.837   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:16:21.837   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:16:21.837   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:16:21.837   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:16:21.837   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:16:21.837   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:16:21.837   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:16:21.837  [2024-12-16 11:37:47.899490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:16:22.096  /dev/nbd0
00:16:22.096    11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:16:22.096   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:16:22.096   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:16:22.096   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i
00:16:22.096   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:16:22.096   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:16:22.096   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:16:22.096   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break
00:16:22.096   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:16:22.096   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:16:22.096   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:16:22.096  1+0 records in
00:16:22.096  1+0 records out
00:16:22.096  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474053 s, 8.6 MB/s
00:16:22.096    11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:16:22.096   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096
00:16:22.096   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:16:22.096   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:16:22.096   11:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0
00:16:22.096   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:16:22.096   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:16:22.096   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']'
00:16:22.096   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384
00:16:22.096   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192
00:16:22.096   11:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct
00:16:22.355  512+0 records in
00:16:22.355  512+0 records out
00:16:22.355  100663296 bytes (101 MB, 96 MiB) copied, 0.434115 s, 232 MB/s
00:16:22.355   11:37:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:16:22.355   11:37:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:16:22.355   11:37:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:16:22.355   11:37:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:16:22.355   11:37:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:16:22.355   11:37:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:16:22.355   11:37:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:16:22.615    11:37:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:16:22.615  [2024-12-16 11:37:48.618630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:22.615   11:37:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:16:22.615   11:37:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:16:22.615   11:37:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:16:22.615   11:37:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:16:22.615   11:37:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:16:22.615   11:37:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:16:22.615   11:37:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:16:22.615   11:37:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:16:22.615   11:37:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:22.615   11:37:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:22.615  [2024-12-16 11:37:48.634698] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:16:22.615   11:37:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:22.615   11:37:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:16:22.615   11:37:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:22.615   11:37:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:22.615   11:37:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:22.615   11:37:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:22.615   11:37:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:22.615   11:37:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:22.615   11:37:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:22.615   11:37:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:22.615   11:37:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:22.615    11:37:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:22.615    11:37:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:22.615    11:37:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:22.615    11:37:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:22.615    11:37:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:22.874   11:37:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:22.874    "name": "raid_bdev1",
00:16:22.874    "uuid": "b61f53b9-dd36-431a-9f05-99bb13f17cd0",
00:16:22.874    "strip_size_kb": 64,
00:16:22.874    "state": "online",
00:16:22.874    "raid_level": "raid5f",
00:16:22.874    "superblock": false,
00:16:22.874    "num_base_bdevs": 4,
00:16:22.874    "num_base_bdevs_discovered": 3,
00:16:22.874    "num_base_bdevs_operational": 3,
00:16:22.874    "base_bdevs_list": [
00:16:22.874      {
00:16:22.874        "name": null,
00:16:22.874        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:22.874        "is_configured": false,
00:16:22.874        "data_offset": 0,
00:16:22.874        "data_size": 65536
00:16:22.874      },
00:16:22.874      {
00:16:22.874        "name": "BaseBdev2",
00:16:22.874        "uuid": "a09ac6f9-fd59-5e87-89f0-95a821c5d57c",
00:16:22.874        "is_configured": true,
00:16:22.874        "data_offset": 0,
00:16:22.874        "data_size": 65536
00:16:22.874      },
00:16:22.874      {
00:16:22.874        "name": "BaseBdev3",
00:16:22.874        "uuid": "84aba746-a1ca-5d14-8ba9-77e9ec0ed9a5",
00:16:22.874        "is_configured": true,
00:16:22.874        "data_offset": 0,
00:16:22.874        "data_size": 65536
00:16:22.874      },
00:16:22.874      {
00:16:22.874        "name": "BaseBdev4",
00:16:22.874        "uuid": "1a411fd9-5e50-5c57-9d89-ffa149d9ff0e",
00:16:22.874        "is_configured": true,
00:16:22.874        "data_offset": 0,
00:16:22.874        "data_size": 65536
00:16:22.874      }
00:16:22.874    ]
00:16:22.875  }'
00:16:22.875   11:37:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:22.875   11:37:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:23.133   11:37:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:16:23.133   11:37:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:23.133   11:37:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:23.133  [2024-12-16 11:37:49.078073] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:16:23.133  [2024-12-16 11:37:49.084606] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0
00:16:23.133   11:37:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:23.133   11:37:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1
00:16:23.133  [2024-12-16 11:37:49.087509] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:16:24.070   11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:24.070   11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:24.070   11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:24.070   11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:24.070   11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:24.070    11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:24.070    11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:24.070    11:37:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:24.070    11:37:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:24.070    11:37:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:24.330   11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:24.330    "name": "raid_bdev1",
00:16:24.330    "uuid": "b61f53b9-dd36-431a-9f05-99bb13f17cd0",
00:16:24.330    "strip_size_kb": 64,
00:16:24.330    "state": "online",
00:16:24.330    "raid_level": "raid5f",
00:16:24.330    "superblock": false,
00:16:24.330    "num_base_bdevs": 4,
00:16:24.330    "num_base_bdevs_discovered": 4,
00:16:24.330    "num_base_bdevs_operational": 4,
00:16:24.330    "process": {
00:16:24.330      "type": "rebuild",
00:16:24.330      "target": "spare",
00:16:24.330      "progress": {
00:16:24.330        "blocks": 19200,
00:16:24.330        "percent": 9
00:16:24.330      }
00:16:24.330    },
00:16:24.330    "base_bdevs_list": [
00:16:24.330      {
00:16:24.330        "name": "spare",
00:16:24.330        "uuid": "29631110-13b4-5081-8735-5cb13c0478c5",
00:16:24.330        "is_configured": true,
00:16:24.330        "data_offset": 0,
00:16:24.330        "data_size": 65536
00:16:24.330      },
00:16:24.330      {
00:16:24.330        "name": "BaseBdev2",
00:16:24.330        "uuid": "a09ac6f9-fd59-5e87-89f0-95a821c5d57c",
00:16:24.330        "is_configured": true,
00:16:24.330        "data_offset": 0,
00:16:24.330        "data_size": 65536
00:16:24.330      },
00:16:24.330      {
00:16:24.330        "name": "BaseBdev3",
00:16:24.330        "uuid": "84aba746-a1ca-5d14-8ba9-77e9ec0ed9a5",
00:16:24.330        "is_configured": true,
00:16:24.330        "data_offset": 0,
00:16:24.330        "data_size": 65536
00:16:24.330      },
00:16:24.330      {
00:16:24.330        "name": "BaseBdev4",
00:16:24.330        "uuid": "1a411fd9-5e50-5c57-9d89-ffa149d9ff0e",
00:16:24.330        "is_configured": true,
00:16:24.330        "data_offset": 0,
00:16:24.330        "data_size": 65536
00:16:24.330      }
00:16:24.330    ]
00:16:24.330  }'
00:16:24.330    11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:24.330   11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:24.330    11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:24.330   11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:24.330   11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:16:24.330   11:37:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:24.330   11:37:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:24.330  [2024-12-16 11:37:50.230614] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:16:24.330  [2024-12-16 11:37:50.295934] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:16:24.330  [2024-12-16 11:37:50.296002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:24.330  [2024-12-16 11:37:50.296025] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:16:24.330  [2024-12-16 11:37:50.296034] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:16:24.330   11:37:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:24.330   11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:16:24.330   11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:24.330   11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:24.331   11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:24.331   11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:24.331   11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:24.331   11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:24.331   11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:24.331   11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:24.331   11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:24.331    11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:24.331    11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:24.331    11:37:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:24.331    11:37:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:24.331    11:37:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:24.331   11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:24.331    "name": "raid_bdev1",
00:16:24.331    "uuid": "b61f53b9-dd36-431a-9f05-99bb13f17cd0",
00:16:24.331    "strip_size_kb": 64,
00:16:24.331    "state": "online",
00:16:24.331    "raid_level": "raid5f",
00:16:24.331    "superblock": false,
00:16:24.331    "num_base_bdevs": 4,
00:16:24.331    "num_base_bdevs_discovered": 3,
00:16:24.331    "num_base_bdevs_operational": 3,
00:16:24.331    "base_bdevs_list": [
00:16:24.331      {
00:16:24.331        "name": null,
00:16:24.331        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:24.331        "is_configured": false,
00:16:24.331        "data_offset": 0,
00:16:24.331        "data_size": 65536
00:16:24.331      },
00:16:24.331      {
00:16:24.331        "name": "BaseBdev2",
00:16:24.331        "uuid": "a09ac6f9-fd59-5e87-89f0-95a821c5d57c",
00:16:24.331        "is_configured": true,
00:16:24.331        "data_offset": 0,
00:16:24.331        "data_size": 65536
00:16:24.331      },
00:16:24.331      {
00:16:24.331        "name": "BaseBdev3",
00:16:24.331        "uuid": "84aba746-a1ca-5d14-8ba9-77e9ec0ed9a5",
00:16:24.331        "is_configured": true,
00:16:24.331        "data_offset": 0,
00:16:24.331        "data_size": 65536
00:16:24.331      },
00:16:24.331      {
00:16:24.331        "name": "BaseBdev4",
00:16:24.331        "uuid": "1a411fd9-5e50-5c57-9d89-ffa149d9ff0e",
00:16:24.331        "is_configured": true,
00:16:24.331        "data_offset": 0,
00:16:24.331        "data_size": 65536
00:16:24.331      }
00:16:24.331    ]
00:16:24.331  }'
00:16:24.331   11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:24.331   11:37:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:24.900   11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:16:24.900   11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:24.900   11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:16:24.900   11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none
00:16:24.900   11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:24.900    11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:24.900    11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:24.900    11:37:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:24.900    11:37:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:24.900    11:37:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:24.900   11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:24.900    "name": "raid_bdev1",
00:16:24.900    "uuid": "b61f53b9-dd36-431a-9f05-99bb13f17cd0",
00:16:24.900    "strip_size_kb": 64,
00:16:24.900    "state": "online",
00:16:24.900    "raid_level": "raid5f",
00:16:24.900    "superblock": false,
00:16:24.900    "num_base_bdevs": 4,
00:16:24.900    "num_base_bdevs_discovered": 3,
00:16:24.900    "num_base_bdevs_operational": 3,
00:16:24.900    "base_bdevs_list": [
00:16:24.900      {
00:16:24.900        "name": null,
00:16:24.900        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:24.900        "is_configured": false,
00:16:24.900        "data_offset": 0,
00:16:24.900        "data_size": 65536
00:16:24.900      },
00:16:24.900      {
00:16:24.900        "name": "BaseBdev2",
00:16:24.900        "uuid": "a09ac6f9-fd59-5e87-89f0-95a821c5d57c",
00:16:24.900        "is_configured": true,
00:16:24.900        "data_offset": 0,
00:16:24.900        "data_size": 65536
00:16:24.900      },
00:16:24.900      {
00:16:24.900        "name": "BaseBdev3",
00:16:24.900        "uuid": "84aba746-a1ca-5d14-8ba9-77e9ec0ed9a5",
00:16:24.900        "is_configured": true,
00:16:24.900        "data_offset": 0,
00:16:24.900        "data_size": 65536
00:16:24.900      },
00:16:24.900      {
00:16:24.900        "name": "BaseBdev4",
00:16:24.900        "uuid": "1a411fd9-5e50-5c57-9d89-ffa149d9ff0e",
00:16:24.900        "is_configured": true,
00:16:24.900        "data_offset": 0,
00:16:24.900        "data_size": 65536
00:16:24.900      }
00:16:24.900    ]
00:16:24.900  }'
00:16:24.900    11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:24.900   11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:16:24.900    11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:24.900   11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:16:24.900   11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:16:24.900   11:37:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:24.900   11:37:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:24.900  [2024-12-16 11:37:50.908688] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:16:24.900  [2024-12-16 11:37:50.912138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680
00:16:24.900  [2024-12-16 11:37:50.914384] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:16:24.900   11:37:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:24.900   11:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1
00:16:26.318   11:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:26.318   11:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:26.318   11:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:26.318   11:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:26.318   11:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:26.318    11:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:26.318    11:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:26.318    11:37:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:26.318    11:37:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:26.318    11:37:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:26.318   11:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:26.318    "name": "raid_bdev1",
00:16:26.318    "uuid": "b61f53b9-dd36-431a-9f05-99bb13f17cd0",
00:16:26.318    "strip_size_kb": 64,
00:16:26.318    "state": "online",
00:16:26.318    "raid_level": "raid5f",
00:16:26.318    "superblock": false,
00:16:26.318    "num_base_bdevs": 4,
00:16:26.318    "num_base_bdevs_discovered": 4,
00:16:26.318    "num_base_bdevs_operational": 4,
00:16:26.318    "process": {
00:16:26.318      "type": "rebuild",
00:16:26.318      "target": "spare",
00:16:26.318      "progress": {
00:16:26.318        "blocks": 19200,
00:16:26.318        "percent": 9
00:16:26.318      }
00:16:26.318    },
00:16:26.318    "base_bdevs_list": [
00:16:26.318      {
00:16:26.318        "name": "spare",
00:16:26.318        "uuid": "29631110-13b4-5081-8735-5cb13c0478c5",
00:16:26.318        "is_configured": true,
00:16:26.318        "data_offset": 0,
00:16:26.318        "data_size": 65536
00:16:26.318      },
00:16:26.318      {
00:16:26.318        "name": "BaseBdev2",
00:16:26.318        "uuid": "a09ac6f9-fd59-5e87-89f0-95a821c5d57c",
00:16:26.318        "is_configured": true,
00:16:26.318        "data_offset": 0,
00:16:26.318        "data_size": 65536
00:16:26.318      },
00:16:26.318      {
00:16:26.318        "name": "BaseBdev3",
00:16:26.318        "uuid": "84aba746-a1ca-5d14-8ba9-77e9ec0ed9a5",
00:16:26.318        "is_configured": true,
00:16:26.318        "data_offset": 0,
00:16:26.318        "data_size": 65536
00:16:26.318      },
00:16:26.318      {
00:16:26.318        "name": "BaseBdev4",
00:16:26.318        "uuid": "1a411fd9-5e50-5c57-9d89-ffa149d9ff0e",
00:16:26.318        "is_configured": true,
00:16:26.318        "data_offset": 0,
00:16:26.318        "data_size": 65536
00:16:26.318      }
00:16:26.318    ]
00:16:26.318  }'
00:16:26.318    11:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:26.318   11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:26.318    11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:26.318   11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:26.318   11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']'
00:16:26.318   11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4
00:16:26.318   11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']'
00:16:26.318   11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=525
00:16:26.318   11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:26.318   11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:26.318   11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:26.318   11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:26.318   11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:26.318   11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:26.318    11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:26.318    11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:26.318    11:37:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:26.318    11:37:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:26.319    11:37:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:26.319   11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:26.319    "name": "raid_bdev1",
00:16:26.319    "uuid": "b61f53b9-dd36-431a-9f05-99bb13f17cd0",
00:16:26.319    "strip_size_kb": 64,
00:16:26.319    "state": "online",
00:16:26.319    "raid_level": "raid5f",
00:16:26.319    "superblock": false,
00:16:26.319    "num_base_bdevs": 4,
00:16:26.319    "num_base_bdevs_discovered": 4,
00:16:26.319    "num_base_bdevs_operational": 4,
00:16:26.319    "process": {
00:16:26.319      "type": "rebuild",
00:16:26.319      "target": "spare",
00:16:26.319      "progress": {
00:16:26.319        "blocks": 21120,
00:16:26.319        "percent": 10
00:16:26.319      }
00:16:26.319    },
00:16:26.319    "base_bdevs_list": [
00:16:26.319      {
00:16:26.319        "name": "spare",
00:16:26.319        "uuid": "29631110-13b4-5081-8735-5cb13c0478c5",
00:16:26.319        "is_configured": true,
00:16:26.319        "data_offset": 0,
00:16:26.319        "data_size": 65536
00:16:26.319      },
00:16:26.319      {
00:16:26.319        "name": "BaseBdev2",
00:16:26.319        "uuid": "a09ac6f9-fd59-5e87-89f0-95a821c5d57c",
00:16:26.319        "is_configured": true,
00:16:26.319        "data_offset": 0,
00:16:26.319        "data_size": 65536
00:16:26.319      },
00:16:26.319      {
00:16:26.319        "name": "BaseBdev3",
00:16:26.319        "uuid": "84aba746-a1ca-5d14-8ba9-77e9ec0ed9a5",
00:16:26.319        "is_configured": true,
00:16:26.319        "data_offset": 0,
00:16:26.319        "data_size": 65536
00:16:26.319      },
00:16:26.319      {
00:16:26.319        "name": "BaseBdev4",
00:16:26.319        "uuid": "1a411fd9-5e50-5c57-9d89-ffa149d9ff0e",
00:16:26.319        "is_configured": true,
00:16:26.319        "data_offset": 0,
00:16:26.319        "data_size": 65536
00:16:26.319      }
00:16:26.319    ]
00:16:26.319  }'
00:16:26.319    11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:26.319   11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:26.319    11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:26.319   11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:26.319   11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:27.256   11:37:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:27.256   11:37:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:27.256   11:37:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:27.256   11:37:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:27.256   11:37:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:27.256   11:37:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:27.256    11:37:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:27.256    11:37:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:27.256    11:37:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:27.256    11:37:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:27.256    11:37:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:27.256   11:37:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:27.256    "name": "raid_bdev1",
00:16:27.256    "uuid": "b61f53b9-dd36-431a-9f05-99bb13f17cd0",
00:16:27.256    "strip_size_kb": 64,
00:16:27.256    "state": "online",
00:16:27.256    "raid_level": "raid5f",
00:16:27.256    "superblock": false,
00:16:27.256    "num_base_bdevs": 4,
00:16:27.256    "num_base_bdevs_discovered": 4,
00:16:27.256    "num_base_bdevs_operational": 4,
00:16:27.256    "process": {
00:16:27.256      "type": "rebuild",
00:16:27.256      "target": "spare",
00:16:27.256      "progress": {
00:16:27.256        "blocks": 42240,
00:16:27.256        "percent": 21
00:16:27.256      }
00:16:27.256    },
00:16:27.256    "base_bdevs_list": [
00:16:27.256      {
00:16:27.256        "name": "spare",
00:16:27.256        "uuid": "29631110-13b4-5081-8735-5cb13c0478c5",
00:16:27.256        "is_configured": true,
00:16:27.256        "data_offset": 0,
00:16:27.256        "data_size": 65536
00:16:27.256      },
00:16:27.256      {
00:16:27.256        "name": "BaseBdev2",
00:16:27.256        "uuid": "a09ac6f9-fd59-5e87-89f0-95a821c5d57c",
00:16:27.256        "is_configured": true,
00:16:27.256        "data_offset": 0,
00:16:27.256        "data_size": 65536
00:16:27.256      },
00:16:27.256      {
00:16:27.256        "name": "BaseBdev3",
00:16:27.256        "uuid": "84aba746-a1ca-5d14-8ba9-77e9ec0ed9a5",
00:16:27.256        "is_configured": true,
00:16:27.256        "data_offset": 0,
00:16:27.256        "data_size": 65536
00:16:27.256      },
00:16:27.257      {
00:16:27.257        "name": "BaseBdev4",
00:16:27.257        "uuid": "1a411fd9-5e50-5c57-9d89-ffa149d9ff0e",
00:16:27.257        "is_configured": true,
00:16:27.257        "data_offset": 0,
00:16:27.257        "data_size": 65536
00:16:27.257      }
00:16:27.257    ]
00:16:27.257  }'
00:16:27.257    11:37:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:27.257   11:37:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:27.257    11:37:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:27.516   11:37:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:27.516   11:37:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:28.452   11:37:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:28.452   11:37:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:28.452   11:37:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:28.452   11:37:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:28.452   11:37:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:28.452   11:37:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:28.452    11:37:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:28.452    11:37:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:28.452    11:37:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:28.452    11:37:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:28.452    11:37:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:28.452   11:37:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:28.452    "name": "raid_bdev1",
00:16:28.452    "uuid": "b61f53b9-dd36-431a-9f05-99bb13f17cd0",
00:16:28.452    "strip_size_kb": 64,
00:16:28.452    "state": "online",
00:16:28.452    "raid_level": "raid5f",
00:16:28.452    "superblock": false,
00:16:28.452    "num_base_bdevs": 4,
00:16:28.452    "num_base_bdevs_discovered": 4,
00:16:28.452    "num_base_bdevs_operational": 4,
00:16:28.452    "process": {
00:16:28.452      "type": "rebuild",
00:16:28.452      "target": "spare",
00:16:28.452      "progress": {
00:16:28.452        "blocks": 65280,
00:16:28.452        "percent": 33
00:16:28.452      }
00:16:28.452    },
00:16:28.452    "base_bdevs_list": [
00:16:28.452      {
00:16:28.452        "name": "spare",
00:16:28.452        "uuid": "29631110-13b4-5081-8735-5cb13c0478c5",
00:16:28.452        "is_configured": true,
00:16:28.452        "data_offset": 0,
00:16:28.452        "data_size": 65536
00:16:28.452      },
00:16:28.452      {
00:16:28.452        "name": "BaseBdev2",
00:16:28.452        "uuid": "a09ac6f9-fd59-5e87-89f0-95a821c5d57c",
00:16:28.452        "is_configured": true,
00:16:28.452        "data_offset": 0,
00:16:28.452        "data_size": 65536
00:16:28.452      },
00:16:28.452      {
00:16:28.452        "name": "BaseBdev3",
00:16:28.452        "uuid": "84aba746-a1ca-5d14-8ba9-77e9ec0ed9a5",
00:16:28.452        "is_configured": true,
00:16:28.452        "data_offset": 0,
00:16:28.452        "data_size": 65536
00:16:28.452      },
00:16:28.452      {
00:16:28.452        "name": "BaseBdev4",
00:16:28.452        "uuid": "1a411fd9-5e50-5c57-9d89-ffa149d9ff0e",
00:16:28.452        "is_configured": true,
00:16:28.452        "data_offset": 0,
00:16:28.452        "data_size": 65536
00:16:28.452      }
00:16:28.452    ]
00:16:28.452  }'
00:16:28.452    11:37:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:28.452   11:37:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:28.452    11:37:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:28.452   11:37:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:28.452   11:37:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:29.832   11:37:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:29.832   11:37:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:29.832   11:37:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:29.832   11:37:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:29.832   11:37:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:29.832   11:37:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:29.832    11:37:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:29.832    11:37:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:29.832    11:37:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:29.832    11:37:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:29.832    11:37:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:29.832   11:37:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:29.832    "name": "raid_bdev1",
00:16:29.832    "uuid": "b61f53b9-dd36-431a-9f05-99bb13f17cd0",
00:16:29.832    "strip_size_kb": 64,
00:16:29.832    "state": "online",
00:16:29.832    "raid_level": "raid5f",
00:16:29.832    "superblock": false,
00:16:29.832    "num_base_bdevs": 4,
00:16:29.832    "num_base_bdevs_discovered": 4,
00:16:29.832    "num_base_bdevs_operational": 4,
00:16:29.832    "process": {
00:16:29.832      "type": "rebuild",
00:16:29.832      "target": "spare",
00:16:29.832      "progress": {
00:16:29.832        "blocks": 86400,
00:16:29.832        "percent": 43
00:16:29.832      }
00:16:29.832    },
00:16:29.832    "base_bdevs_list": [
00:16:29.833      {
00:16:29.833        "name": "spare",
00:16:29.833        "uuid": "29631110-13b4-5081-8735-5cb13c0478c5",
00:16:29.833        "is_configured": true,
00:16:29.833        "data_offset": 0,
00:16:29.833        "data_size": 65536
00:16:29.833      },
00:16:29.833      {
00:16:29.833        "name": "BaseBdev2",
00:16:29.833        "uuid": "a09ac6f9-fd59-5e87-89f0-95a821c5d57c",
00:16:29.833        "is_configured": true,
00:16:29.833        "data_offset": 0,
00:16:29.833        "data_size": 65536
00:16:29.833      },
00:16:29.833      {
00:16:29.833        "name": "BaseBdev3",
00:16:29.833        "uuid": "84aba746-a1ca-5d14-8ba9-77e9ec0ed9a5",
00:16:29.833        "is_configured": true,
00:16:29.833        "data_offset": 0,
00:16:29.833        "data_size": 65536
00:16:29.833      },
00:16:29.833      {
00:16:29.833        "name": "BaseBdev4",
00:16:29.833        "uuid": "1a411fd9-5e50-5c57-9d89-ffa149d9ff0e",
00:16:29.833        "is_configured": true,
00:16:29.833        "data_offset": 0,
00:16:29.833        "data_size": 65536
00:16:29.833      }
00:16:29.833    ]
00:16:29.833  }'
00:16:29.833    11:37:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:29.833   11:37:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:29.833    11:37:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:29.833   11:37:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:29.833   11:37:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:30.770   11:37:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:30.770   11:37:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:30.770   11:37:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:30.770   11:37:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:30.770   11:37:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:30.770   11:37:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:30.770    11:37:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:30.770    11:37:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:30.770    11:37:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:30.770    11:37:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:30.770    11:37:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:30.770   11:37:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:30.770    "name": "raid_bdev1",
00:16:30.770    "uuid": "b61f53b9-dd36-431a-9f05-99bb13f17cd0",
00:16:30.770    "strip_size_kb": 64,
00:16:30.770    "state": "online",
00:16:30.770    "raid_level": "raid5f",
00:16:30.770    "superblock": false,
00:16:30.770    "num_base_bdevs": 4,
00:16:30.770    "num_base_bdevs_discovered": 4,
00:16:30.770    "num_base_bdevs_operational": 4,
00:16:30.770    "process": {
00:16:30.770      "type": "rebuild",
00:16:30.770      "target": "spare",
00:16:30.770      "progress": {
00:16:30.770        "blocks": 107520,
00:16:30.770        "percent": 54
00:16:30.770      }
00:16:30.770    },
00:16:30.770    "base_bdevs_list": [
00:16:30.770      {
00:16:30.770        "name": "spare",
00:16:30.770        "uuid": "29631110-13b4-5081-8735-5cb13c0478c5",
00:16:30.770        "is_configured": true,
00:16:30.770        "data_offset": 0,
00:16:30.770        "data_size": 65536
00:16:30.770      },
00:16:30.770      {
00:16:30.770        "name": "BaseBdev2",
00:16:30.770        "uuid": "a09ac6f9-fd59-5e87-89f0-95a821c5d57c",
00:16:30.770        "is_configured": true,
00:16:30.770        "data_offset": 0,
00:16:30.770        "data_size": 65536
00:16:30.770      },
00:16:30.770      {
00:16:30.770        "name": "BaseBdev3",
00:16:30.770        "uuid": "84aba746-a1ca-5d14-8ba9-77e9ec0ed9a5",
00:16:30.770        "is_configured": true,
00:16:30.770        "data_offset": 0,
00:16:30.770        "data_size": 65536
00:16:30.770      },
00:16:30.770      {
00:16:30.770        "name": "BaseBdev4",
00:16:30.770        "uuid": "1a411fd9-5e50-5c57-9d89-ffa149d9ff0e",
00:16:30.770        "is_configured": true,
00:16:30.770        "data_offset": 0,
00:16:30.770        "data_size": 65536
00:16:30.770      }
00:16:30.770    ]
00:16:30.770  }'
00:16:30.770    11:37:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:30.770   11:37:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:30.770    11:37:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:30.770   11:37:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:30.770   11:37:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:32.148   11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:32.148   11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:32.149   11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:32.149   11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:32.149   11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:32.149   11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:32.149    11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:32.149    11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:32.149    11:37:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:32.149    11:37:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:32.149    11:37:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:32.149   11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:32.149    "name": "raid_bdev1",
00:16:32.149    "uuid": "b61f53b9-dd36-431a-9f05-99bb13f17cd0",
00:16:32.149    "strip_size_kb": 64,
00:16:32.149    "state": "online",
00:16:32.149    "raid_level": "raid5f",
00:16:32.149    "superblock": false,
00:16:32.149    "num_base_bdevs": 4,
00:16:32.149    "num_base_bdevs_discovered": 4,
00:16:32.149    "num_base_bdevs_operational": 4,
00:16:32.149    "process": {
00:16:32.149      "type": "rebuild",
00:16:32.149      "target": "spare",
00:16:32.149      "progress": {
00:16:32.149        "blocks": 130560,
00:16:32.149        "percent": 66
00:16:32.149      }
00:16:32.149    },
00:16:32.149    "base_bdevs_list": [
00:16:32.149      {
00:16:32.149        "name": "spare",
00:16:32.149        "uuid": "29631110-13b4-5081-8735-5cb13c0478c5",
00:16:32.149        "is_configured": true,
00:16:32.149        "data_offset": 0,
00:16:32.149        "data_size": 65536
00:16:32.149      },
00:16:32.149      {
00:16:32.149        "name": "BaseBdev2",
00:16:32.149        "uuid": "a09ac6f9-fd59-5e87-89f0-95a821c5d57c",
00:16:32.149        "is_configured": true,
00:16:32.149        "data_offset": 0,
00:16:32.149        "data_size": 65536
00:16:32.149      },
00:16:32.149      {
00:16:32.149        "name": "BaseBdev3",
00:16:32.149        "uuid": "84aba746-a1ca-5d14-8ba9-77e9ec0ed9a5",
00:16:32.149        "is_configured": true,
00:16:32.149        "data_offset": 0,
00:16:32.149        "data_size": 65536
00:16:32.149      },
00:16:32.149      {
00:16:32.149        "name": "BaseBdev4",
00:16:32.149        "uuid": "1a411fd9-5e50-5c57-9d89-ffa149d9ff0e",
00:16:32.149        "is_configured": true,
00:16:32.149        "data_offset": 0,
00:16:32.149        "data_size": 65536
00:16:32.149      }
00:16:32.149    ]
00:16:32.149  }'
00:16:32.149    11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:32.149   11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:32.149    11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:32.149   11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:32.149   11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:33.083   11:37:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:33.083   11:37:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:33.083   11:37:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:33.083   11:37:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:33.083   11:37:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:33.083   11:37:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:33.083    11:37:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:33.084    11:37:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:33.084    11:37:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:33.084    11:37:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:33.084    11:37:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:33.084   11:37:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:33.084    "name": "raid_bdev1",
00:16:33.084    "uuid": "b61f53b9-dd36-431a-9f05-99bb13f17cd0",
00:16:33.084    "strip_size_kb": 64,
00:16:33.084    "state": "online",
00:16:33.084    "raid_level": "raid5f",
00:16:33.084    "superblock": false,
00:16:33.084    "num_base_bdevs": 4,
00:16:33.084    "num_base_bdevs_discovered": 4,
00:16:33.084    "num_base_bdevs_operational": 4,
00:16:33.084    "process": {
00:16:33.084      "type": "rebuild",
00:16:33.084      "target": "spare",
00:16:33.084      "progress": {
00:16:33.084        "blocks": 153600,
00:16:33.084        "percent": 78
00:16:33.084      }
00:16:33.084    },
00:16:33.084    "base_bdevs_list": [
00:16:33.084      {
00:16:33.084        "name": "spare",
00:16:33.084        "uuid": "29631110-13b4-5081-8735-5cb13c0478c5",
00:16:33.084        "is_configured": true,
00:16:33.084        "data_offset": 0,
00:16:33.084        "data_size": 65536
00:16:33.084      },
00:16:33.084      {
00:16:33.084        "name": "BaseBdev2",
00:16:33.084        "uuid": "a09ac6f9-fd59-5e87-89f0-95a821c5d57c",
00:16:33.084        "is_configured": true,
00:16:33.084        "data_offset": 0,
00:16:33.084        "data_size": 65536
00:16:33.084      },
00:16:33.084      {
00:16:33.084        "name": "BaseBdev3",
00:16:33.084        "uuid": "84aba746-a1ca-5d14-8ba9-77e9ec0ed9a5",
00:16:33.084        "is_configured": true,
00:16:33.084        "data_offset": 0,
00:16:33.084        "data_size": 65536
00:16:33.084      },
00:16:33.084      {
00:16:33.084        "name": "BaseBdev4",
00:16:33.084        "uuid": "1a411fd9-5e50-5c57-9d89-ffa149d9ff0e",
00:16:33.084        "is_configured": true,
00:16:33.084        "data_offset": 0,
00:16:33.084        "data_size": 65536
00:16:33.084      }
00:16:33.084    ]
00:16:33.084  }'
00:16:33.084    11:37:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:33.084   11:37:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:33.084    11:37:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:33.084   11:37:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:33.084   11:37:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:34.461   11:38:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:34.461   11:38:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:34.461   11:38:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:34.461   11:38:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:34.461   11:38:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:34.461   11:38:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:34.461    11:38:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:34.461    11:38:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:34.461    11:38:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:34.462    11:38:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:34.462    11:38:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:34.462   11:38:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:34.462    "name": "raid_bdev1",
00:16:34.462    "uuid": "b61f53b9-dd36-431a-9f05-99bb13f17cd0",
00:16:34.462    "strip_size_kb": 64,
00:16:34.462    "state": "online",
00:16:34.462    "raid_level": "raid5f",
00:16:34.462    "superblock": false,
00:16:34.462    "num_base_bdevs": 4,
00:16:34.462    "num_base_bdevs_discovered": 4,
00:16:34.462    "num_base_bdevs_operational": 4,
00:16:34.462    "process": {
00:16:34.462      "type": "rebuild",
00:16:34.462      "target": "spare",
00:16:34.462      "progress": {
00:16:34.462        "blocks": 174720,
00:16:34.462        "percent": 88
00:16:34.462      }
00:16:34.462    },
00:16:34.462    "base_bdevs_list": [
00:16:34.462      {
00:16:34.462        "name": "spare",
00:16:34.462        "uuid": "29631110-13b4-5081-8735-5cb13c0478c5",
00:16:34.462        "is_configured": true,
00:16:34.462        "data_offset": 0,
00:16:34.462        "data_size": 65536
00:16:34.462      },
00:16:34.462      {
00:16:34.462        "name": "BaseBdev2",
00:16:34.462        "uuid": "a09ac6f9-fd59-5e87-89f0-95a821c5d57c",
00:16:34.462        "is_configured": true,
00:16:34.462        "data_offset": 0,
00:16:34.462        "data_size": 65536
00:16:34.462      },
00:16:34.462      {
00:16:34.462        "name": "BaseBdev3",
00:16:34.462        "uuid": "84aba746-a1ca-5d14-8ba9-77e9ec0ed9a5",
00:16:34.462        "is_configured": true,
00:16:34.462        "data_offset": 0,
00:16:34.462        "data_size": 65536
00:16:34.462      },
00:16:34.462      {
00:16:34.462        "name": "BaseBdev4",
00:16:34.462        "uuid": "1a411fd9-5e50-5c57-9d89-ffa149d9ff0e",
00:16:34.462        "is_configured": true,
00:16:34.462        "data_offset": 0,
00:16:34.462        "data_size": 65536
00:16:34.462      }
00:16:34.462    ]
00:16:34.462  }'
00:16:34.462    11:38:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:34.462   11:38:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:34.462    11:38:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:34.462   11:38:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:34.462   11:38:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:35.400   11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:35.400   11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:35.400   11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:35.400   11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:35.400   11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:35.400   11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:35.400    11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:35.400    11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:35.400    11:38:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:35.400    11:38:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:35.400  [2024-12-16 11:38:01.275332] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:16:35.400  [2024-12-16 11:38:01.275467] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:16:35.400  [2024-12-16 11:38:01.275517] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:35.400    11:38:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:35.400   11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:35.400    "name": "raid_bdev1",
00:16:35.400    "uuid": "b61f53b9-dd36-431a-9f05-99bb13f17cd0",
00:16:35.400    "strip_size_kb": 64,
00:16:35.400    "state": "online",
00:16:35.400    "raid_level": "raid5f",
00:16:35.400    "superblock": false,
00:16:35.400    "num_base_bdevs": 4,
00:16:35.400    "num_base_bdevs_discovered": 4,
00:16:35.400    "num_base_bdevs_operational": 4,
00:16:35.400    "process": {
00:16:35.400      "type": "rebuild",
00:16:35.400      "target": "spare",
00:16:35.400      "progress": {
00:16:35.400        "blocks": 195840,
00:16:35.400        "percent": 99
00:16:35.400      }
00:16:35.400    },
00:16:35.400    "base_bdevs_list": [
00:16:35.400      {
00:16:35.400        "name": "spare",
00:16:35.400        "uuid": "29631110-13b4-5081-8735-5cb13c0478c5",
00:16:35.400        "is_configured": true,
00:16:35.400        "data_offset": 0,
00:16:35.400        "data_size": 65536
00:16:35.400      },
00:16:35.400      {
00:16:35.400        "name": "BaseBdev2",
00:16:35.400        "uuid": "a09ac6f9-fd59-5e87-89f0-95a821c5d57c",
00:16:35.400        "is_configured": true,
00:16:35.400        "data_offset": 0,
00:16:35.400        "data_size": 65536
00:16:35.400      },
00:16:35.400      {
00:16:35.400        "name": "BaseBdev3",
00:16:35.400        "uuid": "84aba746-a1ca-5d14-8ba9-77e9ec0ed9a5",
00:16:35.400        "is_configured": true,
00:16:35.400        "data_offset": 0,
00:16:35.400        "data_size": 65536
00:16:35.400      },
00:16:35.400      {
00:16:35.400        "name": "BaseBdev4",
00:16:35.400        "uuid": "1a411fd9-5e50-5c57-9d89-ffa149d9ff0e",
00:16:35.400        "is_configured": true,
00:16:35.400        "data_offset": 0,
00:16:35.400        "data_size": 65536
00:16:35.400      }
00:16:35.400    ]
00:16:35.400  }'
00:16:35.400    11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:35.400   11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:35.400    11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:35.400   11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:35.400   11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:36.387   11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:36.387   11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:36.387   11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:36.387   11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:36.387   11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:36.387   11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:36.387    11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:36.387    11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:36.387    11:38:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:36.387    11:38:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:36.387    11:38:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:36.647   11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:36.647    "name": "raid_bdev1",
00:16:36.647    "uuid": "b61f53b9-dd36-431a-9f05-99bb13f17cd0",
00:16:36.647    "strip_size_kb": 64,
00:16:36.647    "state": "online",
00:16:36.647    "raid_level": "raid5f",
00:16:36.647    "superblock": false,
00:16:36.647    "num_base_bdevs": 4,
00:16:36.647    "num_base_bdevs_discovered": 4,
00:16:36.647    "num_base_bdevs_operational": 4,
00:16:36.647    "base_bdevs_list": [
00:16:36.647      {
00:16:36.647        "name": "spare",
00:16:36.647        "uuid": "29631110-13b4-5081-8735-5cb13c0478c5",
00:16:36.647        "is_configured": true,
00:16:36.647        "data_offset": 0,
00:16:36.647        "data_size": 65536
00:16:36.647      },
00:16:36.647      {
00:16:36.647        "name": "BaseBdev2",
00:16:36.647        "uuid": "a09ac6f9-fd59-5e87-89f0-95a821c5d57c",
00:16:36.647        "is_configured": true,
00:16:36.647        "data_offset": 0,
00:16:36.647        "data_size": 65536
00:16:36.647      },
00:16:36.647      {
00:16:36.647        "name": "BaseBdev3",
00:16:36.647        "uuid": "84aba746-a1ca-5d14-8ba9-77e9ec0ed9a5",
00:16:36.647        "is_configured": true,
00:16:36.647        "data_offset": 0,
00:16:36.647        "data_size": 65536
00:16:36.647      },
00:16:36.647      {
00:16:36.647        "name": "BaseBdev4",
00:16:36.647        "uuid": "1a411fd9-5e50-5c57-9d89-ffa149d9ff0e",
00:16:36.647        "is_configured": true,
00:16:36.647        "data_offset": 0,
00:16:36.647        "data_size": 65536
00:16:36.647      }
00:16:36.647    ]
00:16:36.647  }'
00:16:36.647    11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:36.647   11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:16:36.647    11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:36.647   11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:16:36.647   11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break
00:16:36.647   11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:16:36.647   11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:36.647   11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:16:36.647   11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none
00:16:36.647   11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:36.647    11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:36.647    11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:36.647    11:38:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:36.647    11:38:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:36.647    11:38:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:36.647   11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:36.647    "name": "raid_bdev1",
00:16:36.647    "uuid": "b61f53b9-dd36-431a-9f05-99bb13f17cd0",
00:16:36.647    "strip_size_kb": 64,
00:16:36.647    "state": "online",
00:16:36.647    "raid_level": "raid5f",
00:16:36.647    "superblock": false,
00:16:36.647    "num_base_bdevs": 4,
00:16:36.647    "num_base_bdevs_discovered": 4,
00:16:36.647    "num_base_bdevs_operational": 4,
00:16:36.647    "base_bdevs_list": [
00:16:36.647      {
00:16:36.647        "name": "spare",
00:16:36.647        "uuid": "29631110-13b4-5081-8735-5cb13c0478c5",
00:16:36.647        "is_configured": true,
00:16:36.647        "data_offset": 0,
00:16:36.647        "data_size": 65536
00:16:36.647      },
00:16:36.647      {
00:16:36.647        "name": "BaseBdev2",
00:16:36.647        "uuid": "a09ac6f9-fd59-5e87-89f0-95a821c5d57c",
00:16:36.647        "is_configured": true,
00:16:36.647        "data_offset": 0,
00:16:36.647        "data_size": 65536
00:16:36.647      },
00:16:36.647      {
00:16:36.647        "name": "BaseBdev3",
00:16:36.647        "uuid": "84aba746-a1ca-5d14-8ba9-77e9ec0ed9a5",
00:16:36.647        "is_configured": true,
00:16:36.647        "data_offset": 0,
00:16:36.647        "data_size": 65536
00:16:36.647      },
00:16:36.647      {
00:16:36.647        "name": "BaseBdev4",
00:16:36.647        "uuid": "1a411fd9-5e50-5c57-9d89-ffa149d9ff0e",
00:16:36.647        "is_configured": true,
00:16:36.647        "data_offset": 0,
00:16:36.647        "data_size": 65536
00:16:36.647      }
00:16:36.647    ]
00:16:36.647  }'
00:16:36.647    11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:36.647   11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:16:36.647    11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:36.906   11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:16:36.906   11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:16:36.906   11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:36.906   11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:36.906   11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:36.906   11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:36.906   11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:36.906   11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:36.906   11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:36.906   11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:36.906   11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:36.906    11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:36.906    11:38:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:36.906    11:38:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:36.906    11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:36.906    11:38:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:36.906   11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:36.906    "name": "raid_bdev1",
00:16:36.906    "uuid": "b61f53b9-dd36-431a-9f05-99bb13f17cd0",
00:16:36.906    "strip_size_kb": 64,
00:16:36.906    "state": "online",
00:16:36.906    "raid_level": "raid5f",
00:16:36.906    "superblock": false,
00:16:36.906    "num_base_bdevs": 4,
00:16:36.906    "num_base_bdevs_discovered": 4,
00:16:36.906    "num_base_bdevs_operational": 4,
00:16:36.906    "base_bdevs_list": [
00:16:36.906      {
00:16:36.906        "name": "spare",
00:16:36.906        "uuid": "29631110-13b4-5081-8735-5cb13c0478c5",
00:16:36.906        "is_configured": true,
00:16:36.906        "data_offset": 0,
00:16:36.906        "data_size": 65536
00:16:36.906      },
00:16:36.906      {
00:16:36.906        "name": "BaseBdev2",
00:16:36.906        "uuid": "a09ac6f9-fd59-5e87-89f0-95a821c5d57c",
00:16:36.906        "is_configured": true,
00:16:36.906        "data_offset": 0,
00:16:36.906        "data_size": 65536
00:16:36.906      },
00:16:36.906      {
00:16:36.906        "name": "BaseBdev3",
00:16:36.906        "uuid": "84aba746-a1ca-5d14-8ba9-77e9ec0ed9a5",
00:16:36.906        "is_configured": true,
00:16:36.906        "data_offset": 0,
00:16:36.906        "data_size": 65536
00:16:36.906      },
00:16:36.906      {
00:16:36.906        "name": "BaseBdev4",
00:16:36.906        "uuid": "1a411fd9-5e50-5c57-9d89-ffa149d9ff0e",
00:16:36.906        "is_configured": true,
00:16:36.906        "data_offset": 0,
00:16:36.906        "data_size": 65536
00:16:36.906      }
00:16:36.906    ]
00:16:36.906  }'
00:16:36.906   11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:36.906   11:38:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:37.166   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:16:37.166   11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:37.166   11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:37.166  [2024-12-16 11:38:03.141805] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:37.166  [2024-12-16 11:38:03.141892] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:37.166  [2024-12-16 11:38:03.142017] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:37.166  [2024-12-16 11:38:03.142146] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:37.166  [2024-12-16 11:38:03.142200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:16:37.166   11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:37.166    11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length
00:16:37.166    11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:37.166    11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:37.166    11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:37.166    11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:37.166   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:16:37.166   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:16:37.166   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:16:37.166   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:16:37.166   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:16:37.166   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:16:37.166   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:16:37.166   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:16:37.166   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:16:37.166   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:16:37.166   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:16:37.166   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:16:37.166   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:16:37.426  /dev/nbd0
00:16:37.426    11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:16:37.426   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:16:37.426   11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:16:37.426   11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i
00:16:37.426   11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:16:37.426   11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:16:37.426   11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:16:37.426   11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break
00:16:37.426   11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:16:37.426   11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:16:37.426   11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:16:37.426  1+0 records in
00:16:37.426  1+0 records out
00:16:37.426  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391167 s, 10.5 MB/s
00:16:37.426    11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:16:37.426   11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096
00:16:37.426   11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:16:37.426   11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:16:37.426   11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0
00:16:37.426   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:16:37.426   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:16:37.426   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1
00:16:37.686  /dev/nbd1
00:16:37.686    11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:16:37.686   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:16:37.686   11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:16:37.686   11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i
00:16:37.686   11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:16:37.686   11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:16:37.686   11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:16:37.686   11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break
00:16:37.686   11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:16:37.686   11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:16:37.686   11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:16:37.686  1+0 records in
00:16:37.686  1+0 records out
00:16:37.686  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272117 s, 15.1 MB/s
00:16:37.686    11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:16:37.686   11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096
00:16:37.686   11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:16:37.686   11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:16:37.686   11:38:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0
00:16:37.686   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:16:37.686   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:16:37.686   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
00:16:37.686   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:16:37.686   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:16:37.686   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:16:37.686   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:16:37.686   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:16:37.686   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:16:37.686   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:16:37.946    11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:16:37.946   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:16:37.946   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:16:37.946   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:16:37.946   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:16:37.946   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:16:37.946   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:16:37.946   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:16:37.946   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:16:37.946   11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:16:38.205    11:38:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:16:38.205   11:38:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:16:38.205   11:38:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:16:38.205   11:38:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:16:38.205   11:38:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:16:38.205   11:38:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:16:38.205   11:38:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:16:38.205   11:38:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:16:38.205   11:38:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']'
00:16:38.205   11:38:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 95393
00:16:38.205   11:38:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 95393 ']'
00:16:38.205   11:38:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 95393
00:16:38.205    11:38:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname
00:16:38.205   11:38:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:16:38.205    11:38:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95393
00:16:38.205   11:38:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:16:38.205  killing process with pid 95393
00:16:38.205  Received shutdown signal, test time was about 60.000000 seconds
00:16:38.205                                                                                                  Latency(us)
[2024-12-16T11:38:04.272Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-12-16T11:38:04.272Z]  ===================================================================================================================
[2024-12-16T11:38:04.272Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:16:38.205   11:38:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:16:38.205   11:38:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95393'
00:16:38.205   11:38:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 95393
00:16:38.205  [2024-12-16 11:38:04.182718] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:16:38.205   11:38:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 95393
00:16:38.205  [2024-12-16 11:38:04.234997] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:16:38.464   11:38:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0
00:16:38.464  
00:16:38.464  real	0m18.419s
00:16:38.464  user	0m22.331s
00:16:38.464  sys	0m2.256s
00:16:38.464  ************************************
00:16:38.464  END TEST raid5f_rebuild_test
00:16:38.464  ************************************
00:16:38.465   11:38:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:16:38.465   11:38:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:38.465   11:38:04 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true
00:16:38.465   11:38:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:16:38.465   11:38:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:16:38.465   11:38:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:16:38.725  ************************************
00:16:38.725  START TEST raid5f_rebuild_test_sb
00:16:38.725  ************************************
00:16:38.725   11:38:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true
00:16:38.725   11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f
00:16:38.725   11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4
00:16:38.725   11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true
00:16:38.725   11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:16:38.725   11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true
00:16:38.725    11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:16:38.725    11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:16:38.725    11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:16:38.725    11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:16:38.725    11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:16:38.725    11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:16:38.725    11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:16:38.725    11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:16:38.725    11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
00:16:38.725    11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:16:38.725    11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:16:38.725    11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4
00:16:38.725    11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:16:38.725    11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:16:38.725   11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:16:38.725   11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:16:38.725   11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:16:38.725   11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size
00:16:38.725   11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg
00:16:38.725   11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:16:38.725   11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset
00:16:38.725   11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']'
00:16:38.725   11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']'
00:16:38.725   11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64
00:16:38.725   11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64'
00:16:38.725   11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
00:16:38.725   11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
00:16:38.725   11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=95893
00:16:38.725   11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 95893
00:16:38.725   11:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:16:38.725   11:38:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 95893 ']'
00:16:38.725   11:38:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:38.725   11:38:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:16:38.725   11:38:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:38.725  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:38.725   11:38:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:16:38.725   11:38:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:38.725  [2024-12-16 11:38:04.632187] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:16:38.725  [2024-12-16 11:38:04.632431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95893 ]
00:16:38.725  I/O size of 3145728 is greater than zero copy threshold (65536).
00:16:38.725  Zero copy mechanism will not be used.
00:16:38.984  [2024-12-16 11:38:04.795020] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:38.984  [2024-12-16 11:38:04.843596] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:16:38.984  [2024-12-16 11:38:04.893580] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:38.984  [2024-12-16 11:38:04.893725] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:39.552  BaseBdev1_malloc
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:39.552  [2024-12-16 11:38:05.485712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:16:39.552  [2024-12-16 11:38:05.485835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:39.552  [2024-12-16 11:38:05.485881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:16:39.552  [2024-12-16 11:38:05.485917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:39.552  [2024-12-16 11:38:05.488331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:39.552  [2024-12-16 11:38:05.488417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:16:39.552  BaseBdev1
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:39.552  BaseBdev2_malloc
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:39.552  [2024-12-16 11:38:05.524931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:16:39.552  [2024-12-16 11:38:05.525043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:39.552  [2024-12-16 11:38:05.525089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:16:39.552  [2024-12-16 11:38:05.525124] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:39.552  [2024-12-16 11:38:05.527615] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:39.552  [2024-12-16 11:38:05.527688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:16:39.552  BaseBdev2
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:39.552  BaseBdev3_malloc
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:39.552  [2024-12-16 11:38:05.553395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:16:39.552  [2024-12-16 11:38:05.553445] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:39.552  [2024-12-16 11:38:05.553469] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:16:39.552  [2024-12-16 11:38:05.553477] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:39.552  [2024-12-16 11:38:05.555588] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:39.552  [2024-12-16 11:38:05.555624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:16:39.552  BaseBdev3
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:39.552  BaseBdev4_malloc
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:39.552  [2024-12-16 11:38:05.581826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:16:39.552  [2024-12-16 11:38:05.581881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:39.552  [2024-12-16 11:38:05.581905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:16:39.552  [2024-12-16 11:38:05.581913] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:39.552  [2024-12-16 11:38:05.584000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:39.552  [2024-12-16 11:38:05.584037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:16:39.552  BaseBdev4
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:39.552  spare_malloc
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:39.552  spare_delay
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:39.552   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:39.811  [2024-12-16 11:38:05.622280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:16:39.811  [2024-12-16 11:38:05.622337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:39.811  [2024-12-16 11:38:05.622358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:16:39.811  [2024-12-16 11:38:05.622366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:39.811  [2024-12-16 11:38:05.624456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:39.811  [2024-12-16 11:38:05.624583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:16:39.811  spare
00:16:39.811   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:39.811   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1
00:16:39.811   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:39.811   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:39.811  [2024-12-16 11:38:05.634348] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:39.811  [2024-12-16 11:38:05.636196] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:16:39.811  [2024-12-16 11:38:05.636342] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:39.811  [2024-12-16 11:38:05.636387] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:16:39.811  [2024-12-16 11:38:05.636564] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:16:39.811  [2024-12-16 11:38:05.636577] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:16:39.811  [2024-12-16 11:38:05.636826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:16:39.811  [2024-12-16 11:38:05.637274] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:16:39.811  [2024-12-16 11:38:05.637289] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:16:39.811  [2024-12-16 11:38:05.637414] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:39.811   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:39.811   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:16:39.811   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:39.811   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:39.811   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:39.811   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:39.811   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:39.811   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:39.811   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:39.811   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:39.811   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:39.811    11:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:39.811    11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:39.811    11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:39.811    11:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:39.811    11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:39.811   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:39.811    "name": "raid_bdev1",
00:16:39.811    "uuid": "e8603985-f8f9-4f5d-87eb-631d5fd7a94b",
00:16:39.811    "strip_size_kb": 64,
00:16:39.811    "state": "online",
00:16:39.811    "raid_level": "raid5f",
00:16:39.811    "superblock": true,
00:16:39.811    "num_base_bdevs": 4,
00:16:39.811    "num_base_bdevs_discovered": 4,
00:16:39.811    "num_base_bdevs_operational": 4,
00:16:39.811    "base_bdevs_list": [
00:16:39.811      {
00:16:39.811        "name": "BaseBdev1",
00:16:39.811        "uuid": "dd20daa5-ff07-5a3b-987a-81f12b94253b",
00:16:39.811        "is_configured": true,
00:16:39.811        "data_offset": 2048,
00:16:39.811        "data_size": 63488
00:16:39.811      },
00:16:39.811      {
00:16:39.811        "name": "BaseBdev2",
00:16:39.811        "uuid": "ee94e14b-f93d-5309-8653-88f776ab59fa",
00:16:39.811        "is_configured": true,
00:16:39.811        "data_offset": 2048,
00:16:39.811        "data_size": 63488
00:16:39.811      },
00:16:39.811      {
00:16:39.811        "name": "BaseBdev3",
00:16:39.811        "uuid": "f9d121ea-6b13-53b9-a880-8e89ac8ed9ee",
00:16:39.811        "is_configured": true,
00:16:39.811        "data_offset": 2048,
00:16:39.812        "data_size": 63488
00:16:39.812      },
00:16:39.812      {
00:16:39.812        "name": "BaseBdev4",
00:16:39.812        "uuid": "243a9142-a136-543f-aee6-16c5f57e6c0c",
00:16:39.812        "is_configured": true,
00:16:39.812        "data_offset": 2048,
00:16:39.812        "data_size": 63488
00:16:39.812      }
00:16:39.812    ]
00:16:39.812  }'
00:16:39.812   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:39.812   11:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:40.071    11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:40.071    11:38:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:40.071    11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:16:40.071    11:38:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:40.071  [2024-12-16 11:38:06.118574] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:40.071    11:38:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:40.331   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464
00:16:40.331    11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:16:40.331    11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:40.331    11:38:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:40.331    11:38:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:40.331    11:38:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:40.331   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048
00:16:40.331   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:16:40.331   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:16:40.331   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:16:40.331   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:16:40.331   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:16:40.331   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:16:40.331   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:16:40.331   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:16:40.331   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:16:40.331   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:16:40.331   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:16:40.331   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:16:40.331   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:16:40.331  [2024-12-16 11:38:06.353998] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:16:40.331  /dev/nbd0
00:16:40.331    11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:16:40.331   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:16:40.331   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:16:40.331   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i
00:16:40.331   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:16:40.331   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:16:40.331   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:16:40.331   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break
00:16:40.331   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:16:40.331   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:16:40.331   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:16:40.590  1+0 records in
00:16:40.590  1+0 records out
00:16:40.590  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414637 s, 9.9 MB/s
00:16:40.590    11:38:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:16:40.590   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096
00:16:40.590   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:16:40.590   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:16:40.590   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0
00:16:40.590   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:16:40.590   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:16:40.590   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']'
00:16:40.590   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384
00:16:40.590   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192
00:16:40.590   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct
00:16:40.848  496+0 records in
00:16:40.849  496+0 records out
00:16:40.849  97517568 bytes (98 MB, 93 MiB) copied, 0.412689 s, 236 MB/s
00:16:40.849   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:16:40.849   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:16:40.849   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:16:40.849   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:16:40.849   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:16:40.849   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:16:40.849   11:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:16:41.108    11:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:16:41.108  [2024-12-16 11:38:07.051965] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:41.108   11:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:16:41.108   11:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:16:41.108   11:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:16:41.108   11:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:16:41.108   11:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:16:41.108   11:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:16:41.108   11:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:16:41.108   11:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:16:41.108   11:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:41.108   11:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:41.108  [2024-12-16 11:38:07.076031] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:16:41.108   11:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:41.108   11:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:16:41.108   11:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:41.108   11:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:41.108   11:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:41.108   11:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:41.108   11:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:41.108   11:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:41.108   11:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:41.108   11:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:41.108   11:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:41.108    11:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:41.108    11:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:41.108    11:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:41.108    11:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:41.108    11:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:41.108   11:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:41.108    "name": "raid_bdev1",
00:16:41.108    "uuid": "e8603985-f8f9-4f5d-87eb-631d5fd7a94b",
00:16:41.108    "strip_size_kb": 64,
00:16:41.108    "state": "online",
00:16:41.108    "raid_level": "raid5f",
00:16:41.108    "superblock": true,
00:16:41.108    "num_base_bdevs": 4,
00:16:41.108    "num_base_bdevs_discovered": 3,
00:16:41.108    "num_base_bdevs_operational": 3,
00:16:41.108    "base_bdevs_list": [
00:16:41.108      {
00:16:41.108        "name": null,
00:16:41.108        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:41.108        "is_configured": false,
00:16:41.108        "data_offset": 0,
00:16:41.108        "data_size": 63488
00:16:41.108      },
00:16:41.108      {
00:16:41.108        "name": "BaseBdev2",
00:16:41.108        "uuid": "ee94e14b-f93d-5309-8653-88f776ab59fa",
00:16:41.108        "is_configured": true,
00:16:41.108        "data_offset": 2048,
00:16:41.108        "data_size": 63488
00:16:41.108      },
00:16:41.108      {
00:16:41.108        "name": "BaseBdev3",
00:16:41.108        "uuid": "f9d121ea-6b13-53b9-a880-8e89ac8ed9ee",
00:16:41.108        "is_configured": true,
00:16:41.108        "data_offset": 2048,
00:16:41.108        "data_size": 63488
00:16:41.108      },
00:16:41.108      {
00:16:41.108        "name": "BaseBdev4",
00:16:41.108        "uuid": "243a9142-a136-543f-aee6-16c5f57e6c0c",
00:16:41.108        "is_configured": true,
00:16:41.108        "data_offset": 2048,
00:16:41.108        "data_size": 63488
00:16:41.108      }
00:16:41.108    ]
00:16:41.108  }'
00:16:41.108   11:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:41.108   11:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:41.675   11:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:16:41.675   11:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:41.675   11:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:41.675  [2024-12-16 11:38:07.491511] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:16:41.675  [2024-12-16 11:38:07.495659] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a8b0
00:16:41.675   11:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:41.676   11:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1
00:16:41.676  [2024-12-16 11:38:07.498277] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:16:42.613   11:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:42.613   11:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:42.613   11:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:42.613   11:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:42.613   11:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:42.613    11:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:42.613    11:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:42.613    11:38:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:42.613    11:38:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:42.613    11:38:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:42.613   11:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:42.613    "name": "raid_bdev1",
00:16:42.613    "uuid": "e8603985-f8f9-4f5d-87eb-631d5fd7a94b",
00:16:42.613    "strip_size_kb": 64,
00:16:42.613    "state": "online",
00:16:42.613    "raid_level": "raid5f",
00:16:42.613    "superblock": true,
00:16:42.613    "num_base_bdevs": 4,
00:16:42.613    "num_base_bdevs_discovered": 4,
00:16:42.613    "num_base_bdevs_operational": 4,
00:16:42.613    "process": {
00:16:42.613      "type": "rebuild",
00:16:42.613      "target": "spare",
00:16:42.613      "progress": {
00:16:42.613        "blocks": 19200,
00:16:42.613        "percent": 10
00:16:42.613      }
00:16:42.613    },
00:16:42.613    "base_bdevs_list": [
00:16:42.613      {
00:16:42.613        "name": "spare",
00:16:42.613        "uuid": "d8f4e1a9-5a08-55fa-86cb-78513fbe3a1e",
00:16:42.613        "is_configured": true,
00:16:42.613        "data_offset": 2048,
00:16:42.613        "data_size": 63488
00:16:42.613      },
00:16:42.613      {
00:16:42.613        "name": "BaseBdev2",
00:16:42.613        "uuid": "ee94e14b-f93d-5309-8653-88f776ab59fa",
00:16:42.613        "is_configured": true,
00:16:42.613        "data_offset": 2048,
00:16:42.613        "data_size": 63488
00:16:42.613      },
00:16:42.613      {
00:16:42.613        "name": "BaseBdev3",
00:16:42.613        "uuid": "f9d121ea-6b13-53b9-a880-8e89ac8ed9ee",
00:16:42.613        "is_configured": true,
00:16:42.613        "data_offset": 2048,
00:16:42.613        "data_size": 63488
00:16:42.613      },
00:16:42.613      {
00:16:42.613        "name": "BaseBdev4",
00:16:42.613        "uuid": "243a9142-a136-543f-aee6-16c5f57e6c0c",
00:16:42.613        "is_configured": true,
00:16:42.613        "data_offset": 2048,
00:16:42.613        "data_size": 63488
00:16:42.613      }
00:16:42.613    ]
00:16:42.613  }'
00:16:42.613    11:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:42.613   11:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:42.613    11:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:42.613   11:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:42.613   11:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:16:42.613   11:38:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:42.613   11:38:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:42.613  [2024-12-16 11:38:08.662721] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:16:42.872  [2024-12-16 11:38:08.706660] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:16:42.872  [2024-12-16 11:38:08.706722] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:42.872  [2024-12-16 11:38:08.706742] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:16:42.872  [2024-12-16 11:38:08.706750] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:16:42.872   11:38:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:42.872   11:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:16:42.872   11:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:42.872   11:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:42.872   11:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:42.872   11:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:42.872   11:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:42.872   11:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:42.872   11:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:42.872   11:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:42.872   11:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:42.872    11:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:42.872    11:38:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:42.872    11:38:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:42.872    11:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:42.873    11:38:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:42.873   11:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:42.873    "name": "raid_bdev1",
00:16:42.873    "uuid": "e8603985-f8f9-4f5d-87eb-631d5fd7a94b",
00:16:42.873    "strip_size_kb": 64,
00:16:42.873    "state": "online",
00:16:42.873    "raid_level": "raid5f",
00:16:42.873    "superblock": true,
00:16:42.873    "num_base_bdevs": 4,
00:16:42.873    "num_base_bdevs_discovered": 3,
00:16:42.873    "num_base_bdevs_operational": 3,
00:16:42.873    "base_bdevs_list": [
00:16:42.873      {
00:16:42.873        "name": null,
00:16:42.873        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:42.873        "is_configured": false,
00:16:42.873        "data_offset": 0,
00:16:42.873        "data_size": 63488
00:16:42.873      },
00:16:42.873      {
00:16:42.873        "name": "BaseBdev2",
00:16:42.873        "uuid": "ee94e14b-f93d-5309-8653-88f776ab59fa",
00:16:42.873        "is_configured": true,
00:16:42.873        "data_offset": 2048,
00:16:42.873        "data_size": 63488
00:16:42.873      },
00:16:42.873      {
00:16:42.873        "name": "BaseBdev3",
00:16:42.873        "uuid": "f9d121ea-6b13-53b9-a880-8e89ac8ed9ee",
00:16:42.873        "is_configured": true,
00:16:42.873        "data_offset": 2048,
00:16:42.873        "data_size": 63488
00:16:42.873      },
00:16:42.873      {
00:16:42.873        "name": "BaseBdev4",
00:16:42.873        "uuid": "243a9142-a136-543f-aee6-16c5f57e6c0c",
00:16:42.873        "is_configured": true,
00:16:42.873        "data_offset": 2048,
00:16:42.873        "data_size": 63488
00:16:42.873      }
00:16:42.873    ]
00:16:42.873  }'
00:16:42.873   11:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:42.873   11:38:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:43.131   11:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:16:43.131   11:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:43.131   11:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:16:43.131   11:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:16:43.131   11:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:43.131    11:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:43.131    11:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:43.131    11:38:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:43.132    11:38:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:43.132    11:38:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:43.391   11:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:43.391    "name": "raid_bdev1",
00:16:43.391    "uuid": "e8603985-f8f9-4f5d-87eb-631d5fd7a94b",
00:16:43.391    "strip_size_kb": 64,
00:16:43.391    "state": "online",
00:16:43.391    "raid_level": "raid5f",
00:16:43.391    "superblock": true,
00:16:43.391    "num_base_bdevs": 4,
00:16:43.391    "num_base_bdevs_discovered": 3,
00:16:43.391    "num_base_bdevs_operational": 3,
00:16:43.391    "base_bdevs_list": [
00:16:43.391      {
00:16:43.391        "name": null,
00:16:43.391        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:43.391        "is_configured": false,
00:16:43.391        "data_offset": 0,
00:16:43.391        "data_size": 63488
00:16:43.391      },
00:16:43.391      {
00:16:43.391        "name": "BaseBdev2",
00:16:43.391        "uuid": "ee94e14b-f93d-5309-8653-88f776ab59fa",
00:16:43.391        "is_configured": true,
00:16:43.391        "data_offset": 2048,
00:16:43.391        "data_size": 63488
00:16:43.391      },
00:16:43.391      {
00:16:43.391        "name": "BaseBdev3",
00:16:43.391        "uuid": "f9d121ea-6b13-53b9-a880-8e89ac8ed9ee",
00:16:43.391        "is_configured": true,
00:16:43.391        "data_offset": 2048,
00:16:43.391        "data_size": 63488
00:16:43.391      },
00:16:43.391      {
00:16:43.391        "name": "BaseBdev4",
00:16:43.391        "uuid": "243a9142-a136-543f-aee6-16c5f57e6c0c",
00:16:43.391        "is_configured": true,
00:16:43.391        "data_offset": 2048,
00:16:43.391        "data_size": 63488
00:16:43.391      }
00:16:43.391    ]
00:16:43.391  }'
00:16:43.391    11:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:43.391   11:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:16:43.391    11:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:43.391   11:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:16:43.391   11:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:16:43.391   11:38:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:43.391   11:38:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:43.391  [2024-12-16 11:38:09.283396] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:16:43.391  [2024-12-16 11:38:09.286889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a980
00:16:43.391  [2024-12-16 11:38:09.289327] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:16:43.391   11:38:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:43.391   11:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1
00:16:44.328   11:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:44.328   11:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:44.328   11:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:44.328   11:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:44.328   11:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:44.328    11:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:44.328    11:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:44.328    11:38:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:44.328    11:38:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:44.328    11:38:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:44.328   11:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:44.328    "name": "raid_bdev1",
00:16:44.328    "uuid": "e8603985-f8f9-4f5d-87eb-631d5fd7a94b",
00:16:44.328    "strip_size_kb": 64,
00:16:44.328    "state": "online",
00:16:44.328    "raid_level": "raid5f",
00:16:44.328    "superblock": true,
00:16:44.328    "num_base_bdevs": 4,
00:16:44.328    "num_base_bdevs_discovered": 4,
00:16:44.328    "num_base_bdevs_operational": 4,
00:16:44.328    "process": {
00:16:44.328      "type": "rebuild",
00:16:44.328      "target": "spare",
00:16:44.328      "progress": {
00:16:44.328        "blocks": 19200,
00:16:44.328        "percent": 10
00:16:44.328      }
00:16:44.328    },
00:16:44.328    "base_bdevs_list": [
00:16:44.328      {
00:16:44.328        "name": "spare",
00:16:44.328        "uuid": "d8f4e1a9-5a08-55fa-86cb-78513fbe3a1e",
00:16:44.328        "is_configured": true,
00:16:44.328        "data_offset": 2048,
00:16:44.328        "data_size": 63488
00:16:44.328      },
00:16:44.328      {
00:16:44.328        "name": "BaseBdev2",
00:16:44.328        "uuid": "ee94e14b-f93d-5309-8653-88f776ab59fa",
00:16:44.328        "is_configured": true,
00:16:44.328        "data_offset": 2048,
00:16:44.328        "data_size": 63488
00:16:44.328      },
00:16:44.328      {
00:16:44.328        "name": "BaseBdev3",
00:16:44.328        "uuid": "f9d121ea-6b13-53b9-a880-8e89ac8ed9ee",
00:16:44.328        "is_configured": true,
00:16:44.328        "data_offset": 2048,
00:16:44.328        "data_size": 63488
00:16:44.328      },
00:16:44.328      {
00:16:44.328        "name": "BaseBdev4",
00:16:44.328        "uuid": "243a9142-a136-543f-aee6-16c5f57e6c0c",
00:16:44.328        "is_configured": true,
00:16:44.328        "data_offset": 2048,
00:16:44.328        "data_size": 63488
00:16:44.328      }
00:16:44.328    ]
00:16:44.328  }'
00:16:44.328    11:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:44.594   11:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:44.594    11:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:44.594   11:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:44.594   11:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:16:44.594   11:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:16:44.594  /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:16:44.594   11:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4
00:16:44.594   11:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']'
00:16:44.594   11:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=543
00:16:44.594   11:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:44.594   11:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:44.594   11:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:44.594   11:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:44.594   11:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:44.594   11:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:44.594    11:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:44.594    11:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:44.594    11:38:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:44.594    11:38:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:44.594    11:38:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:44.594   11:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:44.594    "name": "raid_bdev1",
00:16:44.594    "uuid": "e8603985-f8f9-4f5d-87eb-631d5fd7a94b",
00:16:44.594    "strip_size_kb": 64,
00:16:44.594    "state": "online",
00:16:44.594    "raid_level": "raid5f",
00:16:44.594    "superblock": true,
00:16:44.594    "num_base_bdevs": 4,
00:16:44.594    "num_base_bdevs_discovered": 4,
00:16:44.594    "num_base_bdevs_operational": 4,
00:16:44.594    "process": {
00:16:44.594      "type": "rebuild",
00:16:44.594      "target": "spare",
00:16:44.594      "progress": {
00:16:44.594        "blocks": 21120,
00:16:44.594        "percent": 11
00:16:44.594      }
00:16:44.594    },
00:16:44.594    "base_bdevs_list": [
00:16:44.594      {
00:16:44.594        "name": "spare",
00:16:44.594        "uuid": "d8f4e1a9-5a08-55fa-86cb-78513fbe3a1e",
00:16:44.594        "is_configured": true,
00:16:44.594        "data_offset": 2048,
00:16:44.594        "data_size": 63488
00:16:44.594      },
00:16:44.594      {
00:16:44.594        "name": "BaseBdev2",
00:16:44.594        "uuid": "ee94e14b-f93d-5309-8653-88f776ab59fa",
00:16:44.594        "is_configured": true,
00:16:44.594        "data_offset": 2048,
00:16:44.594        "data_size": 63488
00:16:44.594      },
00:16:44.594      {
00:16:44.594        "name": "BaseBdev3",
00:16:44.594        "uuid": "f9d121ea-6b13-53b9-a880-8e89ac8ed9ee",
00:16:44.594        "is_configured": true,
00:16:44.594        "data_offset": 2048,
00:16:44.594        "data_size": 63488
00:16:44.594      },
00:16:44.594      {
00:16:44.594        "name": "BaseBdev4",
00:16:44.594        "uuid": "243a9142-a136-543f-aee6-16c5f57e6c0c",
00:16:44.594        "is_configured": true,
00:16:44.594        "data_offset": 2048,
00:16:44.594        "data_size": 63488
00:16:44.594      }
00:16:44.594    ]
00:16:44.594  }'
00:16:44.594    11:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:44.594   11:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:44.594    11:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:44.595   11:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:44.595   11:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:45.543   11:38:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:45.543   11:38:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:45.543   11:38:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:45.543   11:38:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:45.543   11:38:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:45.543   11:38:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:45.543    11:38:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:45.543    11:38:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:45.543    11:38:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:45.543    11:38:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:45.802    11:38:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:45.802   11:38:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:45.802    "name": "raid_bdev1",
00:16:45.802    "uuid": "e8603985-f8f9-4f5d-87eb-631d5fd7a94b",
00:16:45.802    "strip_size_kb": 64,
00:16:45.802    "state": "online",
00:16:45.802    "raid_level": "raid5f",
00:16:45.802    "superblock": true,
00:16:45.802    "num_base_bdevs": 4,
00:16:45.802    "num_base_bdevs_discovered": 4,
00:16:45.802    "num_base_bdevs_operational": 4,
00:16:45.802    "process": {
00:16:45.802      "type": "rebuild",
00:16:45.802      "target": "spare",
00:16:45.802      "progress": {
00:16:45.802        "blocks": 42240,
00:16:45.802        "percent": 22
00:16:45.802      }
00:16:45.802    },
00:16:45.802    "base_bdevs_list": [
00:16:45.802      {
00:16:45.802        "name": "spare",
00:16:45.802        "uuid": "d8f4e1a9-5a08-55fa-86cb-78513fbe3a1e",
00:16:45.802        "is_configured": true,
00:16:45.802        "data_offset": 2048,
00:16:45.802        "data_size": 63488
00:16:45.802      },
00:16:45.802      {
00:16:45.802        "name": "BaseBdev2",
00:16:45.802        "uuid": "ee94e14b-f93d-5309-8653-88f776ab59fa",
00:16:45.802        "is_configured": true,
00:16:45.802        "data_offset": 2048,
00:16:45.802        "data_size": 63488
00:16:45.802      },
00:16:45.802      {
00:16:45.802        "name": "BaseBdev3",
00:16:45.802        "uuid": "f9d121ea-6b13-53b9-a880-8e89ac8ed9ee",
00:16:45.802        "is_configured": true,
00:16:45.802        "data_offset": 2048,
00:16:45.802        "data_size": 63488
00:16:45.802      },
00:16:45.802      {
00:16:45.802        "name": "BaseBdev4",
00:16:45.802        "uuid": "243a9142-a136-543f-aee6-16c5f57e6c0c",
00:16:45.802        "is_configured": true,
00:16:45.802        "data_offset": 2048,
00:16:45.802        "data_size": 63488
00:16:45.802      }
00:16:45.802    ]
00:16:45.802  }'
00:16:45.802    11:38:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:45.802   11:38:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:45.802    11:38:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:45.802   11:38:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:45.802   11:38:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:46.740   11:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:46.741   11:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:46.741   11:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:46.741   11:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:46.741   11:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:46.741   11:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:46.741    11:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:46.741    11:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:46.741    11:38:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.741    11:38:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:46.741    11:38:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.741   11:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:46.741    "name": "raid_bdev1",
00:16:46.741    "uuid": "e8603985-f8f9-4f5d-87eb-631d5fd7a94b",
00:16:46.741    "strip_size_kb": 64,
00:16:46.741    "state": "online",
00:16:46.741    "raid_level": "raid5f",
00:16:46.741    "superblock": true,
00:16:46.741    "num_base_bdevs": 4,
00:16:46.741    "num_base_bdevs_discovered": 4,
00:16:46.741    "num_base_bdevs_operational": 4,
00:16:46.741    "process": {
00:16:46.741      "type": "rebuild",
00:16:46.741      "target": "spare",
00:16:46.741      "progress": {
00:16:46.741        "blocks": 65280,
00:16:46.741        "percent": 34
00:16:46.741      }
00:16:46.741    },
00:16:46.741    "base_bdevs_list": [
00:16:46.741      {
00:16:46.741        "name": "spare",
00:16:46.741        "uuid": "d8f4e1a9-5a08-55fa-86cb-78513fbe3a1e",
00:16:46.741        "is_configured": true,
00:16:46.741        "data_offset": 2048,
00:16:46.741        "data_size": 63488
00:16:46.741      },
00:16:46.741      {
00:16:46.741        "name": "BaseBdev2",
00:16:46.741        "uuid": "ee94e14b-f93d-5309-8653-88f776ab59fa",
00:16:46.741        "is_configured": true,
00:16:46.741        "data_offset": 2048,
00:16:46.741        "data_size": 63488
00:16:46.741      },
00:16:46.741      {
00:16:46.741        "name": "BaseBdev3",
00:16:46.741        "uuid": "f9d121ea-6b13-53b9-a880-8e89ac8ed9ee",
00:16:46.741        "is_configured": true,
00:16:46.741        "data_offset": 2048,
00:16:46.741        "data_size": 63488
00:16:46.741      },
00:16:46.741      {
00:16:46.741        "name": "BaseBdev4",
00:16:46.741        "uuid": "243a9142-a136-543f-aee6-16c5f57e6c0c",
00:16:46.741        "is_configured": true,
00:16:46.741        "data_offset": 2048,
00:16:46.741        "data_size": 63488
00:16:46.741      }
00:16:46.741    ]
00:16:46.741  }'
00:16:46.741    11:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:47.000   11:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:47.000    11:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:47.000   11:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:47.000   11:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:47.938   11:38:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:47.938   11:38:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:47.938   11:38:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:47.938   11:38:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:47.938   11:38:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:47.938   11:38:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:47.938    11:38:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:47.938    11:38:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:47.938    11:38:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:47.938    11:38:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:47.938    11:38:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:47.938   11:38:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:47.938    "name": "raid_bdev1",
00:16:47.938    "uuid": "e8603985-f8f9-4f5d-87eb-631d5fd7a94b",
00:16:47.938    "strip_size_kb": 64,
00:16:47.938    "state": "online",
00:16:47.938    "raid_level": "raid5f",
00:16:47.938    "superblock": true,
00:16:47.938    "num_base_bdevs": 4,
00:16:47.938    "num_base_bdevs_discovered": 4,
00:16:47.938    "num_base_bdevs_operational": 4,
00:16:47.938    "process": {
00:16:47.938      "type": "rebuild",
00:16:47.938      "target": "spare",
00:16:47.938      "progress": {
00:16:47.938        "blocks": 86400,
00:16:47.938        "percent": 45
00:16:47.938      }
00:16:47.938    },
00:16:47.938    "base_bdevs_list": [
00:16:47.938      {
00:16:47.938        "name": "spare",
00:16:47.938        "uuid": "d8f4e1a9-5a08-55fa-86cb-78513fbe3a1e",
00:16:47.938        "is_configured": true,
00:16:47.938        "data_offset": 2048,
00:16:47.938        "data_size": 63488
00:16:47.938      },
00:16:47.938      {
00:16:47.938        "name": "BaseBdev2",
00:16:47.938        "uuid": "ee94e14b-f93d-5309-8653-88f776ab59fa",
00:16:47.938        "is_configured": true,
00:16:47.938        "data_offset": 2048,
00:16:47.938        "data_size": 63488
00:16:47.938      },
00:16:47.938      {
00:16:47.938        "name": "BaseBdev3",
00:16:47.938        "uuid": "f9d121ea-6b13-53b9-a880-8e89ac8ed9ee",
00:16:47.938        "is_configured": true,
00:16:47.938        "data_offset": 2048,
00:16:47.938        "data_size": 63488
00:16:47.938      },
00:16:47.938      {
00:16:47.938        "name": "BaseBdev4",
00:16:47.938        "uuid": "243a9142-a136-543f-aee6-16c5f57e6c0c",
00:16:47.938        "is_configured": true,
00:16:47.938        "data_offset": 2048,
00:16:47.938        "data_size": 63488
00:16:47.938      }
00:16:47.938    ]
00:16:47.938  }'
00:16:47.938    11:38:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:47.938   11:38:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:47.938    11:38:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:48.198   11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:48.198   11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:49.137   11:38:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:49.137   11:38:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:49.137   11:38:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:49.137   11:38:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:49.137   11:38:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:49.137   11:38:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:49.137    11:38:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:49.137    11:38:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:49.137    11:38:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:49.137    11:38:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:49.137    11:38:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:49.137   11:38:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:49.137    "name": "raid_bdev1",
00:16:49.137    "uuid": "e8603985-f8f9-4f5d-87eb-631d5fd7a94b",
00:16:49.137    "strip_size_kb": 64,
00:16:49.137    "state": "online",
00:16:49.137    "raid_level": "raid5f",
00:16:49.137    "superblock": true,
00:16:49.137    "num_base_bdevs": 4,
00:16:49.137    "num_base_bdevs_discovered": 4,
00:16:49.137    "num_base_bdevs_operational": 4,
00:16:49.137    "process": {
00:16:49.137      "type": "rebuild",
00:16:49.137      "target": "spare",
00:16:49.137      "progress": {
00:16:49.137        "blocks": 109440,
00:16:49.137        "percent": 57
00:16:49.137      }
00:16:49.137    },
00:16:49.137    "base_bdevs_list": [
00:16:49.137      {
00:16:49.137        "name": "spare",
00:16:49.137        "uuid": "d8f4e1a9-5a08-55fa-86cb-78513fbe3a1e",
00:16:49.137        "is_configured": true,
00:16:49.137        "data_offset": 2048,
00:16:49.137        "data_size": 63488
00:16:49.137      },
00:16:49.137      {
00:16:49.137        "name": "BaseBdev2",
00:16:49.137        "uuid": "ee94e14b-f93d-5309-8653-88f776ab59fa",
00:16:49.137        "is_configured": true,
00:16:49.137        "data_offset": 2048,
00:16:49.137        "data_size": 63488
00:16:49.137      },
00:16:49.137      {
00:16:49.137        "name": "BaseBdev3",
00:16:49.137        "uuid": "f9d121ea-6b13-53b9-a880-8e89ac8ed9ee",
00:16:49.137        "is_configured": true,
00:16:49.137        "data_offset": 2048,
00:16:49.137        "data_size": 63488
00:16:49.137      },
00:16:49.137      {
00:16:49.137        "name": "BaseBdev4",
00:16:49.137        "uuid": "243a9142-a136-543f-aee6-16c5f57e6c0c",
00:16:49.137        "is_configured": true,
00:16:49.137        "data_offset": 2048,
00:16:49.137        "data_size": 63488
00:16:49.137      }
00:16:49.137    ]
00:16:49.137  }'
00:16:49.137    11:38:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:49.137   11:38:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:49.137    11:38:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:49.137   11:38:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:49.137   11:38:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:50.518   11:38:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:50.518   11:38:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:50.518   11:38:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:50.518   11:38:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:50.518   11:38:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:50.518   11:38:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:50.518    11:38:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:50.518    11:38:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:50.518    11:38:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:50.518    11:38:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:50.518    11:38:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:50.518   11:38:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:50.518    "name": "raid_bdev1",
00:16:50.518    "uuid": "e8603985-f8f9-4f5d-87eb-631d5fd7a94b",
00:16:50.518    "strip_size_kb": 64,
00:16:50.518    "state": "online",
00:16:50.518    "raid_level": "raid5f",
00:16:50.518    "superblock": true,
00:16:50.518    "num_base_bdevs": 4,
00:16:50.518    "num_base_bdevs_discovered": 4,
00:16:50.518    "num_base_bdevs_operational": 4,
00:16:50.518    "process": {
00:16:50.518      "type": "rebuild",
00:16:50.518      "target": "spare",
00:16:50.518      "progress": {
00:16:50.518        "blocks": 130560,
00:16:50.518        "percent": 68
00:16:50.518      }
00:16:50.518    },
00:16:50.519    "base_bdevs_list": [
00:16:50.519      {
00:16:50.519        "name": "spare",
00:16:50.519        "uuid": "d8f4e1a9-5a08-55fa-86cb-78513fbe3a1e",
00:16:50.519        "is_configured": true,
00:16:50.519        "data_offset": 2048,
00:16:50.519        "data_size": 63488
00:16:50.519      },
00:16:50.519      {
00:16:50.519        "name": "BaseBdev2",
00:16:50.519        "uuid": "ee94e14b-f93d-5309-8653-88f776ab59fa",
00:16:50.519        "is_configured": true,
00:16:50.519        "data_offset": 2048,
00:16:50.519        "data_size": 63488
00:16:50.519      },
00:16:50.519      {
00:16:50.519        "name": "BaseBdev3",
00:16:50.519        "uuid": "f9d121ea-6b13-53b9-a880-8e89ac8ed9ee",
00:16:50.519        "is_configured": true,
00:16:50.519        "data_offset": 2048,
00:16:50.519        "data_size": 63488
00:16:50.519      },
00:16:50.519      {
00:16:50.519        "name": "BaseBdev4",
00:16:50.519        "uuid": "243a9142-a136-543f-aee6-16c5f57e6c0c",
00:16:50.519        "is_configured": true,
00:16:50.519        "data_offset": 2048,
00:16:50.519        "data_size": 63488
00:16:50.519      }
00:16:50.519    ]
00:16:50.519  }'
00:16:50.519    11:38:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:50.519   11:38:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:50.519    11:38:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:50.519   11:38:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:50.519   11:38:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:51.458   11:38:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:51.458   11:38:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:51.458   11:38:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:51.458   11:38:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:51.458   11:38:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:51.458   11:38:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:51.459    11:38:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:51.459    11:38:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:51.459    11:38:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:51.459    11:38:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:51.459    11:38:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:51.459   11:38:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:51.459    "name": "raid_bdev1",
00:16:51.459    "uuid": "e8603985-f8f9-4f5d-87eb-631d5fd7a94b",
00:16:51.459    "strip_size_kb": 64,
00:16:51.459    "state": "online",
00:16:51.459    "raid_level": "raid5f",
00:16:51.459    "superblock": true,
00:16:51.459    "num_base_bdevs": 4,
00:16:51.459    "num_base_bdevs_discovered": 4,
00:16:51.459    "num_base_bdevs_operational": 4,
00:16:51.459    "process": {
00:16:51.459      "type": "rebuild",
00:16:51.459      "target": "spare",
00:16:51.459      "progress": {
00:16:51.459        "blocks": 151680,
00:16:51.459        "percent": 79
00:16:51.459      }
00:16:51.459    },
00:16:51.459    "base_bdevs_list": [
00:16:51.459      {
00:16:51.459        "name": "spare",
00:16:51.459        "uuid": "d8f4e1a9-5a08-55fa-86cb-78513fbe3a1e",
00:16:51.459        "is_configured": true,
00:16:51.459        "data_offset": 2048,
00:16:51.459        "data_size": 63488
00:16:51.459      },
00:16:51.459      {
00:16:51.459        "name": "BaseBdev2",
00:16:51.459        "uuid": "ee94e14b-f93d-5309-8653-88f776ab59fa",
00:16:51.459        "is_configured": true,
00:16:51.459        "data_offset": 2048,
00:16:51.459        "data_size": 63488
00:16:51.459      },
00:16:51.459      {
00:16:51.459        "name": "BaseBdev3",
00:16:51.459        "uuid": "f9d121ea-6b13-53b9-a880-8e89ac8ed9ee",
00:16:51.459        "is_configured": true,
00:16:51.459        "data_offset": 2048,
00:16:51.459        "data_size": 63488
00:16:51.459      },
00:16:51.459      {
00:16:51.459        "name": "BaseBdev4",
00:16:51.459        "uuid": "243a9142-a136-543f-aee6-16c5f57e6c0c",
00:16:51.459        "is_configured": true,
00:16:51.459        "data_offset": 2048,
00:16:51.459        "data_size": 63488
00:16:51.459      }
00:16:51.459    ]
00:16:51.459  }'
00:16:51.459    11:38:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:51.459   11:38:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:51.459    11:38:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:51.459   11:38:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:51.459   11:38:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:52.398   11:38:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:52.398   11:38:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:52.658   11:38:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:52.658   11:38:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:52.658   11:38:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:52.658   11:38:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:52.658    11:38:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:52.658    11:38:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:52.658    11:38:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:52.658    11:38:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:52.658    11:38:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:52.658   11:38:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:52.658    "name": "raid_bdev1",
00:16:52.658    "uuid": "e8603985-f8f9-4f5d-87eb-631d5fd7a94b",
00:16:52.658    "strip_size_kb": 64,
00:16:52.658    "state": "online",
00:16:52.658    "raid_level": "raid5f",
00:16:52.658    "superblock": true,
00:16:52.658    "num_base_bdevs": 4,
00:16:52.658    "num_base_bdevs_discovered": 4,
00:16:52.658    "num_base_bdevs_operational": 4,
00:16:52.658    "process": {
00:16:52.658      "type": "rebuild",
00:16:52.658      "target": "spare",
00:16:52.658      "progress": {
00:16:52.658        "blocks": 174720,
00:16:52.658        "percent": 91
00:16:52.658      }
00:16:52.658    },
00:16:52.658    "base_bdevs_list": [
00:16:52.658      {
00:16:52.658        "name": "spare",
00:16:52.658        "uuid": "d8f4e1a9-5a08-55fa-86cb-78513fbe3a1e",
00:16:52.658        "is_configured": true,
00:16:52.658        "data_offset": 2048,
00:16:52.658        "data_size": 63488
00:16:52.658      },
00:16:52.658      {
00:16:52.658        "name": "BaseBdev2",
00:16:52.658        "uuid": "ee94e14b-f93d-5309-8653-88f776ab59fa",
00:16:52.658        "is_configured": true,
00:16:52.658        "data_offset": 2048,
00:16:52.658        "data_size": 63488
00:16:52.658      },
00:16:52.658      {
00:16:52.658        "name": "BaseBdev3",
00:16:52.658        "uuid": "f9d121ea-6b13-53b9-a880-8e89ac8ed9ee",
00:16:52.658        "is_configured": true,
00:16:52.658        "data_offset": 2048,
00:16:52.659        "data_size": 63488
00:16:52.659      },
00:16:52.659      {
00:16:52.659        "name": "BaseBdev4",
00:16:52.659        "uuid": "243a9142-a136-543f-aee6-16c5f57e6c0c",
00:16:52.659        "is_configured": true,
00:16:52.659        "data_offset": 2048,
00:16:52.659        "data_size": 63488
00:16:52.659      }
00:16:52.659    ]
00:16:52.659  }'
00:16:52.659    11:38:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:52.659   11:38:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:52.659    11:38:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:52.659   11:38:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:52.659   11:38:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:53.596  [2024-12-16 11:38:19.345765] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:16:53.596  [2024-12-16 11:38:19.345908] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:16:53.596  [2024-12-16 11:38:19.346073] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:53.596   11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:53.596   11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:53.596   11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:53.596   11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:53.596   11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:53.596   11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:53.596    11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:53.596    11:38:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:53.596    11:38:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:53.596    11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:53.596    11:38:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:53.596   11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:53.596    "name": "raid_bdev1",
00:16:53.596    "uuid": "e8603985-f8f9-4f5d-87eb-631d5fd7a94b",
00:16:53.596    "strip_size_kb": 64,
00:16:53.596    "state": "online",
00:16:53.596    "raid_level": "raid5f",
00:16:53.596    "superblock": true,
00:16:53.596    "num_base_bdevs": 4,
00:16:53.596    "num_base_bdevs_discovered": 4,
00:16:53.597    "num_base_bdevs_operational": 4,
00:16:53.597    "base_bdevs_list": [
00:16:53.597      {
00:16:53.597        "name": "spare",
00:16:53.597        "uuid": "d8f4e1a9-5a08-55fa-86cb-78513fbe3a1e",
00:16:53.597        "is_configured": true,
00:16:53.597        "data_offset": 2048,
00:16:53.597        "data_size": 63488
00:16:53.597      },
00:16:53.597      {
00:16:53.597        "name": "BaseBdev2",
00:16:53.597        "uuid": "ee94e14b-f93d-5309-8653-88f776ab59fa",
00:16:53.597        "is_configured": true,
00:16:53.597        "data_offset": 2048,
00:16:53.597        "data_size": 63488
00:16:53.597      },
00:16:53.597      {
00:16:53.597        "name": "BaseBdev3",
00:16:53.597        "uuid": "f9d121ea-6b13-53b9-a880-8e89ac8ed9ee",
00:16:53.597        "is_configured": true,
00:16:53.597        "data_offset": 2048,
00:16:53.597        "data_size": 63488
00:16:53.597      },
00:16:53.597      {
00:16:53.597        "name": "BaseBdev4",
00:16:53.597        "uuid": "243a9142-a136-543f-aee6-16c5f57e6c0c",
00:16:53.597        "is_configured": true,
00:16:53.597        "data_offset": 2048,
00:16:53.597        "data_size": 63488
00:16:53.597      }
00:16:53.597    ]
00:16:53.597  }'
00:16:53.597    11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:53.856   11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:16:53.856    11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:53.856   11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:16:53.856   11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break
00:16:53.856   11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:16:53.856   11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:53.856   11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:16:53.856   11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:16:53.856   11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:53.856    11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:53.856    11:38:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:53.856    11:38:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:53.856    11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:53.856    11:38:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:53.856   11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:53.856    "name": "raid_bdev1",
00:16:53.856    "uuid": "e8603985-f8f9-4f5d-87eb-631d5fd7a94b",
00:16:53.856    "strip_size_kb": 64,
00:16:53.856    "state": "online",
00:16:53.856    "raid_level": "raid5f",
00:16:53.856    "superblock": true,
00:16:53.856    "num_base_bdevs": 4,
00:16:53.856    "num_base_bdevs_discovered": 4,
00:16:53.856    "num_base_bdevs_operational": 4,
00:16:53.856    "base_bdevs_list": [
00:16:53.856      {
00:16:53.856        "name": "spare",
00:16:53.856        "uuid": "d8f4e1a9-5a08-55fa-86cb-78513fbe3a1e",
00:16:53.856        "is_configured": true,
00:16:53.856        "data_offset": 2048,
00:16:53.856        "data_size": 63488
00:16:53.856      },
00:16:53.856      {
00:16:53.856        "name": "BaseBdev2",
00:16:53.856        "uuid": "ee94e14b-f93d-5309-8653-88f776ab59fa",
00:16:53.856        "is_configured": true,
00:16:53.856        "data_offset": 2048,
00:16:53.856        "data_size": 63488
00:16:53.856      },
00:16:53.856      {
00:16:53.856        "name": "BaseBdev3",
00:16:53.856        "uuid": "f9d121ea-6b13-53b9-a880-8e89ac8ed9ee",
00:16:53.856        "is_configured": true,
00:16:53.856        "data_offset": 2048,
00:16:53.856        "data_size": 63488
00:16:53.856      },
00:16:53.856      {
00:16:53.856        "name": "BaseBdev4",
00:16:53.856        "uuid": "243a9142-a136-543f-aee6-16c5f57e6c0c",
00:16:53.856        "is_configured": true,
00:16:53.856        "data_offset": 2048,
00:16:53.856        "data_size": 63488
00:16:53.856      }
00:16:53.856    ]
00:16:53.856  }'
00:16:53.856    11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:53.856   11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:16:53.856    11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:54.116   11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:16:54.116   11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:16:54.116   11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:54.116   11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:54.116   11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:54.116   11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:54.116   11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:54.116   11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:54.116   11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:54.116   11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:54.116   11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:54.116    11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:54.116    11:38:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:54.116    11:38:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:54.116    11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:54.116    11:38:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:54.116   11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:54.116    "name": "raid_bdev1",
00:16:54.116    "uuid": "e8603985-f8f9-4f5d-87eb-631d5fd7a94b",
00:16:54.116    "strip_size_kb": 64,
00:16:54.116    "state": "online",
00:16:54.116    "raid_level": "raid5f",
00:16:54.116    "superblock": true,
00:16:54.116    "num_base_bdevs": 4,
00:16:54.116    "num_base_bdevs_discovered": 4,
00:16:54.116    "num_base_bdevs_operational": 4,
00:16:54.116    "base_bdevs_list": [
00:16:54.116      {
00:16:54.116        "name": "spare",
00:16:54.116        "uuid": "d8f4e1a9-5a08-55fa-86cb-78513fbe3a1e",
00:16:54.116        "is_configured": true,
00:16:54.116        "data_offset": 2048,
00:16:54.116        "data_size": 63488
00:16:54.116      },
00:16:54.116      {
00:16:54.116        "name": "BaseBdev2",
00:16:54.116        "uuid": "ee94e14b-f93d-5309-8653-88f776ab59fa",
00:16:54.116        "is_configured": true,
00:16:54.116        "data_offset": 2048,
00:16:54.116        "data_size": 63488
00:16:54.116      },
00:16:54.116      {
00:16:54.116        "name": "BaseBdev3",
00:16:54.116        "uuid": "f9d121ea-6b13-53b9-a880-8e89ac8ed9ee",
00:16:54.116        "is_configured": true,
00:16:54.116        "data_offset": 2048,
00:16:54.116        "data_size": 63488
00:16:54.116      },
00:16:54.116      {
00:16:54.116        "name": "BaseBdev4",
00:16:54.116        "uuid": "243a9142-a136-543f-aee6-16c5f57e6c0c",
00:16:54.116        "is_configured": true,
00:16:54.116        "data_offset": 2048,
00:16:54.116        "data_size": 63488
00:16:54.116      }
00:16:54.116    ]
00:16:54.116  }'
00:16:54.116   11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:54.116   11:38:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
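verify_raid_bdev_state (bdev_raid.sh lines 103-115) is traced only up to the point where xtrace is disabled, so the actual comparisons are not visible in this log. A hedged sketch of what the helper is presumably asserting, based on the arguments passed above and the fields present in the JSON dumps (the exact checks are an assumption):

verify_raid_bdev_state() {
	local raid_bdev_name=$1 expected_state=$2 raid_level=$3
	local strip_size=$4 num_base_bdevs_operational=$5
	local raid_bdev_info

	raid_bdev_info=$(rpc_cmd bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$raid_bdev_name\")")

	# Assumed checks (they run with xtrace disabled in the real script): compare
	# the reported state, level, strip size and operational base bdev count.
	[[ $(jq -r '.state' <<< "$raid_bdev_info") == "$expected_state" ]]
	[[ $(jq -r '.raid_level' <<< "$raid_bdev_info") == "$raid_level" ]]
	[[ $(jq -r '.strip_size_kb' <<< "$raid_bdev_info") == "$strip_size" ]]
	[[ $(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info") == "$num_base_bdevs_operational" ]]
}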
00:16:54.378   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:16:54.378   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:54.378   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:54.378  [2024-12-16 11:38:20.401768] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:54.378  [2024-12-16 11:38:20.401857] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:54.378  [2024-12-16 11:38:20.401998] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:54.378  [2024-12-16 11:38:20.402131] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:54.378  [2024-12-16 11:38:20.402206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:16:54.378   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:54.378    11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length
00:16:54.378    11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:54.378    11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:54.378    11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:54.378    11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:54.378   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:16:54.378   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:16:54.378   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:16:54.378   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:16:54.378   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:16:54.378   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:16:54.378   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:16:54.378   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:16:54.378   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:16:54.378   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:16:54.378   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:16:54.378   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:16:54.378   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:16:54.640  /dev/nbd0
00:16:54.640    11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:16:54.640   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:16:54.640   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:16:54.640   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i
00:16:54.640   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:16:54.640   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:16:54.640   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:16:54.640   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break
00:16:54.640   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:16:54.640   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:16:54.640   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:16:54.640  1+0 records in
00:16:54.640  1+0 records out
00:16:54.640  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229186 s, 17.9 MB/s
00:16:54.640    11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:16:54.640   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096
00:16:54.640   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:16:54.640   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:16:54.640   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0
00:16:54.640   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:16:54.640   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:16:54.640   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1
00:16:54.900  /dev/nbd1
00:16:54.900    11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:16:54.900   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:16:54.900   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:16:54.900   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i
00:16:54.900   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:16:54.900   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:16:54.900   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:16:54.900   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break
00:16:54.900   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:16:54.900   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:16:54.900   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:16:54.900  1+0 records in
00:16:54.900  1+0 records out
00:16:54.900  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618572 s, 6.6 MB/s
00:16:54.900    11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:16:54.900   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096
00:16:54.900   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:16:54.900   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:16:54.900   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0
00:16:54.900   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:16:54.900   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
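The per-device wait traced above (common/autotest_common.sh lines 868-889) first polls /proc/partitions for the nbd name and then reads a single 4 KiB block to confirm the device is actually usable. A hedged bash sketch; the retry delay and scratch-file path are assumptions (this run used test/bdev/nbdtest inside the repo):

waitfornbd() {
	local nbd_name=$1
	local i
	local scratch=/tmp/nbdtest   # illustrative path; the trace above writes to .../spdk/test/bdev/nbdtest

	# Wait for the kernel to register the nbd device (up to 20 attempts).
	for ((i = 1; i <= 20; i++)); do
		grep -q -w "$nbd_name" /proc/partitions && break
		sleep 0.1   # delay between retries is not visible in the trace; assumed
	done

	# Confirm the device can be read: pull one 4 KiB block off it with O_DIRECT.
	for ((i = 1; i <= 20; i++)); do
		dd if=/dev/$nbd_name of=$scratch bs=4096 count=1 iflag=direct
		local size
		size=$(stat -c %s $scratch)
		rm -f $scratch
		[ "$size" != "0" ] && return 0
		sleep 0.1   # assumed
	done

	return 1
}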
00:16:54.900   11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
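The 1048576-byte offset passed to cmp above corresponds to the superblock data_offset of 2048 blocks at the 512-byte blocklen reported earlier in this log, so only the data regions of BaseBdev1 and the rebuilt spare are compared:

cmp -i $((2048 * 512)) /dev/nbd0 /dev/nbd1   # 2048 blocks * 512 B = 1048576, the offset used above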
00:16:55.160   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:16:55.160   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:16:55.160   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:16:55.160   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:16:55.160   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:16:55.160   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:16:55.160   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:16:55.420    11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:16:55.420   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:16:55.420   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:16:55.420   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:16:55.420   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:16:55.420   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:16:55.420   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:16:55.420   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:16:55.420   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:16:55.420   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:16:55.681    11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:55.681  [2024-12-16 11:38:21.522785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:16:55.681  [2024-12-16 11:38:21.522851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:55.681  [2024-12-16 11:38:21.522873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
00:16:55.681  [2024-12-16 11:38:21.522885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:55.681  [2024-12-16 11:38:21.525346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:55.681  spare
00:16:55.681  [2024-12-16 11:38:21.525440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:16:55.681  [2024-12-16 11:38:21.525553] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:16:55.681  [2024-12-16 11:38:21.525610] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:16:55.681  [2024-12-16 11:38:21.525758] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:16:55.681  [2024-12-16 11:38:21.525851] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:55.681  [2024-12-16 11:38:21.525924] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:55.681  [2024-12-16 11:38:21.625847] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600
00:16:55.681  [2024-12-16 11:38:21.625937] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:16:55.681  [2024-12-16 11:38:21.626261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049030
00:16:55.681  [2024-12-16 11:38:21.626813] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600
00:16:55.681  [2024-12-16 11:38:21.626837] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600
00:16:55.681  [2024-12-16 11:38:21.627013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
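Steps 745-747 above replace the delay passthru vbdev on top of the spare and then wait for bdev examine to re-claim it into the raid. The same sequence issued directly with rpc.py, using the socket path and bdev names from the trace (this assumes the spare_delay base bdev still exists):

./scripts/rpc.py -s /var/tmp/spdk.sock bdev_passthru_delete spare
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_passthru_create -b spare_delay -p spare
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_wait_for_examine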
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:55.681    11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:55.681    11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:55.681    11:38:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:55.681    11:38:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:55.681    11:38:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:55.681    "name": "raid_bdev1",
00:16:55.681    "uuid": "e8603985-f8f9-4f5d-87eb-631d5fd7a94b",
00:16:55.681    "strip_size_kb": 64,
00:16:55.681    "state": "online",
00:16:55.681    "raid_level": "raid5f",
00:16:55.681    "superblock": true,
00:16:55.681    "num_base_bdevs": 4,
00:16:55.681    "num_base_bdevs_discovered": 4,
00:16:55.681    "num_base_bdevs_operational": 4,
00:16:55.681    "base_bdevs_list": [
00:16:55.681      {
00:16:55.681        "name": "spare",
00:16:55.681        "uuid": "d8f4e1a9-5a08-55fa-86cb-78513fbe3a1e",
00:16:55.681        "is_configured": true,
00:16:55.681        "data_offset": 2048,
00:16:55.681        "data_size": 63488
00:16:55.681      },
00:16:55.681      {
00:16:55.681        "name": "BaseBdev2",
00:16:55.681        "uuid": "ee94e14b-f93d-5309-8653-88f776ab59fa",
00:16:55.681        "is_configured": true,
00:16:55.681        "data_offset": 2048,
00:16:55.681        "data_size": 63488
00:16:55.681      },
00:16:55.681      {
00:16:55.681        "name": "BaseBdev3",
00:16:55.681        "uuid": "f9d121ea-6b13-53b9-a880-8e89ac8ed9ee",
00:16:55.681        "is_configured": true,
00:16:55.681        "data_offset": 2048,
00:16:55.681        "data_size": 63488
00:16:55.681      },
00:16:55.681      {
00:16:55.681        "name": "BaseBdev4",
00:16:55.681        "uuid": "243a9142-a136-543f-aee6-16c5f57e6c0c",
00:16:55.681        "is_configured": true,
00:16:55.681        "data_offset": 2048,
00:16:55.681        "data_size": 63488
00:16:55.681      }
00:16:55.681    ]
00:16:55.681  }'
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:55.681   11:38:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:56.251   11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:16:56.251   11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:56.251   11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:16:56.251   11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:16:56.251   11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:56.251    11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:56.251    11:38:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:56.251    11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:56.251    11:38:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:56.251    11:38:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:56.251   11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:56.251    "name": "raid_bdev1",
00:16:56.251    "uuid": "e8603985-f8f9-4f5d-87eb-631d5fd7a94b",
00:16:56.251    "strip_size_kb": 64,
00:16:56.251    "state": "online",
00:16:56.251    "raid_level": "raid5f",
00:16:56.251    "superblock": true,
00:16:56.251    "num_base_bdevs": 4,
00:16:56.251    "num_base_bdevs_discovered": 4,
00:16:56.251    "num_base_bdevs_operational": 4,
00:16:56.251    "base_bdevs_list": [
00:16:56.251      {
00:16:56.251        "name": "spare",
00:16:56.251        "uuid": "d8f4e1a9-5a08-55fa-86cb-78513fbe3a1e",
00:16:56.251        "is_configured": true,
00:16:56.251        "data_offset": 2048,
00:16:56.251        "data_size": 63488
00:16:56.251      },
00:16:56.251      {
00:16:56.251        "name": "BaseBdev2",
00:16:56.251        "uuid": "ee94e14b-f93d-5309-8653-88f776ab59fa",
00:16:56.251        "is_configured": true,
00:16:56.251        "data_offset": 2048,
00:16:56.251        "data_size": 63488
00:16:56.251      },
00:16:56.251      {
00:16:56.251        "name": "BaseBdev3",
00:16:56.251        "uuid": "f9d121ea-6b13-53b9-a880-8e89ac8ed9ee",
00:16:56.251        "is_configured": true,
00:16:56.251        "data_offset": 2048,
00:16:56.251        "data_size": 63488
00:16:56.251      },
00:16:56.251      {
00:16:56.251        "name": "BaseBdev4",
00:16:56.251        "uuid": "243a9142-a136-543f-aee6-16c5f57e6c0c",
00:16:56.252        "is_configured": true,
00:16:56.252        "data_offset": 2048,
00:16:56.252        "data_size": 63488
00:16:56.252      }
00:16:56.252    ]
00:16:56.252  }'
00:16:56.252    11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:56.252   11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:16:56.252    11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:56.252   11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:16:56.252    11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:16:56.252    11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:56.252    11:38:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:56.252    11:38:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:56.252    11:38:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:56.252   11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:16:56.252   11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:16:56.252   11:38:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:56.252   11:38:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:56.252  [2024-12-16 11:38:22.285921] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:16:56.252   11:38:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:56.252   11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:16:56.252   11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:56.252   11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:56.252   11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:56.252   11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:56.252   11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:56.252   11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:56.252   11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:56.252   11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:56.252   11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:56.252    11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:56.252    11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:56.252    11:38:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:56.252    11:38:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:56.512    11:38:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:56.512   11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:56.512    "name": "raid_bdev1",
00:16:56.512    "uuid": "e8603985-f8f9-4f5d-87eb-631d5fd7a94b",
00:16:56.512    "strip_size_kb": 64,
00:16:56.512    "state": "online",
00:16:56.512    "raid_level": "raid5f",
00:16:56.512    "superblock": true,
00:16:56.512    "num_base_bdevs": 4,
00:16:56.512    "num_base_bdevs_discovered": 3,
00:16:56.512    "num_base_bdevs_operational": 3,
00:16:56.512    "base_bdevs_list": [
00:16:56.512      {
00:16:56.512        "name": null,
00:16:56.512        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:56.512        "is_configured": false,
00:16:56.512        "data_offset": 0,
00:16:56.512        "data_size": 63488
00:16:56.512      },
00:16:56.512      {
00:16:56.512        "name": "BaseBdev2",
00:16:56.512        "uuid": "ee94e14b-f93d-5309-8653-88f776ab59fa",
00:16:56.512        "is_configured": true,
00:16:56.512        "data_offset": 2048,
00:16:56.512        "data_size": 63488
00:16:56.512      },
00:16:56.512      {
00:16:56.512        "name": "BaseBdev3",
00:16:56.512        "uuid": "f9d121ea-6b13-53b9-a880-8e89ac8ed9ee",
00:16:56.512        "is_configured": true,
00:16:56.512        "data_offset": 2048,
00:16:56.512        "data_size": 63488
00:16:56.512      },
00:16:56.512      {
00:16:56.512        "name": "BaseBdev4",
00:16:56.512        "uuid": "243a9142-a136-543f-aee6-16c5f57e6c0c",
00:16:56.512        "is_configured": true,
00:16:56.512        "data_offset": 2048,
00:16:56.512        "data_size": 63488
00:16:56.512      }
00:16:56.512    ]
00:16:56.512  }'
00:16:56.512   11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:56.512   11:38:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:56.771   11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:16:56.771   11:38:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:56.771   11:38:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:56.771  [2024-12-16 11:38:22.745210] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:16:56.771  [2024-12-16 11:38:22.745480] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:16:56.771  [2024-12-16 11:38:22.745560] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:16:56.771  [2024-12-16 11:38:22.745653] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:16:56.771  [2024-12-16 11:38:22.748975] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049100
00:16:56.771  [2024-12-16 11:38:22.751285] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:16:56.771   11:38:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:56.771   11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1
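Steps 754-757 above exercise the degrade-and-rebuild path: the spare base bdev is removed, the array is verified to stay online with 3 of 4 base bdevs, and the spare is added back, which starts a rebuild. A hedged rpc.py equivalent of that cycle, with names and socket path taken from the trace:

./scripts/rpc.py -s /var/tmp/spdk.sock bdev_raid_remove_base_bdev spare
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_raid_get_bdevs all \
	| jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_discovered'   # expect 3 while degraded
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_raid_add_base_bdev raid_bdev1 spare
sleep 1   # give the rebuild process a moment to start, as the test does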
00:16:57.711   11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:57.711   11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:57.711   11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:57.711   11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:57.711   11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:57.711    11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:57.711    11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:57.711    11:38:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:57.711    11:38:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:57.971    11:38:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:57.971   11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:57.971    "name": "raid_bdev1",
00:16:57.971    "uuid": "e8603985-f8f9-4f5d-87eb-631d5fd7a94b",
00:16:57.971    "strip_size_kb": 64,
00:16:57.971    "state": "online",
00:16:57.971    "raid_level": "raid5f",
00:16:57.971    "superblock": true,
00:16:57.971    "num_base_bdevs": 4,
00:16:57.971    "num_base_bdevs_discovered": 4,
00:16:57.971    "num_base_bdevs_operational": 4,
00:16:57.971    "process": {
00:16:57.971      "type": "rebuild",
00:16:57.971      "target": "spare",
00:16:57.971      "progress": {
00:16:57.971        "blocks": 19200,
00:16:57.971        "percent": 10
00:16:57.971      }
00:16:57.971    },
00:16:57.971    "base_bdevs_list": [
00:16:57.971      {
00:16:57.971        "name": "spare",
00:16:57.971        "uuid": "d8f4e1a9-5a08-55fa-86cb-78513fbe3a1e",
00:16:57.971        "is_configured": true,
00:16:57.971        "data_offset": 2048,
00:16:57.971        "data_size": 63488
00:16:57.971      },
00:16:57.971      {
00:16:57.971        "name": "BaseBdev2",
00:16:57.971        "uuid": "ee94e14b-f93d-5309-8653-88f776ab59fa",
00:16:57.971        "is_configured": true,
00:16:57.971        "data_offset": 2048,
00:16:57.971        "data_size": 63488
00:16:57.971      },
00:16:57.971      {
00:16:57.971        "name": "BaseBdev3",
00:16:57.971        "uuid": "f9d121ea-6b13-53b9-a880-8e89ac8ed9ee",
00:16:57.971        "is_configured": true,
00:16:57.971        "data_offset": 2048,
00:16:57.971        "data_size": 63488
00:16:57.971      },
00:16:57.971      {
00:16:57.971        "name": "BaseBdev4",
00:16:57.971        "uuid": "243a9142-a136-543f-aee6-16c5f57e6c0c",
00:16:57.971        "is_configured": true,
00:16:57.971        "data_offset": 2048,
00:16:57.971        "data_size": 63488
00:16:57.971      }
00:16:57.971    ]
00:16:57.971  }'
00:16:57.971    11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:57.971   11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:57.971    11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:57.971   11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
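While the rebuild is running, the same bdev_raid_get_bdevs output carries a process object with progress counters (19200 blocks / 10 percent in the dump above). A hedged one-liner for polling it:

./scripts/rpc.py -s /var/tmp/spdk.sock bdev_raid_get_bdevs all \
	| jq -r '.[] | select(.name == "raid_bdev1") | "\(.process.type) \(.process.target): \(.process.progress.blocks) blocks (\(.process.progress.percent)%)"'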
00:16:57.971   11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:16:57.971   11:38:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:57.971   11:38:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:57.971  [2024-12-16 11:38:23.914633] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:16:57.971  [2024-12-16 11:38:23.958219] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:16:57.971  [2024-12-16 11:38:23.958286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:57.971  [2024-12-16 11:38:23.958309] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:16:57.971  [2024-12-16 11:38:23.958318] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:16:57.971   11:38:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:57.971   11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:16:57.971   11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:57.971   11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:57.971   11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:57.971   11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:57.971   11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:57.971   11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:57.971   11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:57.971   11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:57.971   11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:57.971    11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:57.971    11:38:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:57.971    11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:57.971    11:38:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:57.971    11:38:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:57.971   11:38:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:57.971    "name": "raid_bdev1",
00:16:57.971    "uuid": "e8603985-f8f9-4f5d-87eb-631d5fd7a94b",
00:16:57.971    "strip_size_kb": 64,
00:16:57.971    "state": "online",
00:16:57.971    "raid_level": "raid5f",
00:16:57.971    "superblock": true,
00:16:57.971    "num_base_bdevs": 4,
00:16:57.971    "num_base_bdevs_discovered": 3,
00:16:57.971    "num_base_bdevs_operational": 3,
00:16:57.971    "base_bdevs_list": [
00:16:57.971      {
00:16:57.971        "name": null,
00:16:57.971        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:57.971        "is_configured": false,
00:16:57.971        "data_offset": 0,
00:16:57.971        "data_size": 63488
00:16:57.972      },
00:16:57.972      {
00:16:57.972        "name": "BaseBdev2",
00:16:57.972        "uuid": "ee94e14b-f93d-5309-8653-88f776ab59fa",
00:16:57.972        "is_configured": true,
00:16:57.972        "data_offset": 2048,
00:16:57.972        "data_size": 63488
00:16:57.972      },
00:16:57.972      {
00:16:57.972        "name": "BaseBdev3",
00:16:57.972        "uuid": "f9d121ea-6b13-53b9-a880-8e89ac8ed9ee",
00:16:57.972        "is_configured": true,
00:16:57.972        "data_offset": 2048,
00:16:57.972        "data_size": 63488
00:16:57.972      },
00:16:57.972      {
00:16:57.972        "name": "BaseBdev4",
00:16:57.972        "uuid": "243a9142-a136-543f-aee6-16c5f57e6c0c",
00:16:57.972        "is_configured": true,
00:16:57.972        "data_offset": 2048,
00:16:57.972        "data_size": 63488
00:16:57.972      }
00:16:57.972    ]
00:16:57.972  }'
00:16:57.972   11:38:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:57.972   11:38:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:58.540   11:38:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:16:58.540   11:38:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:58.540   11:38:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:58.540  [2024-12-16 11:38:24.430685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:16:58.540  [2024-12-16 11:38:24.430805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:58.540  [2024-12-16 11:38:24.430864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380
00:16:58.540  [2024-12-16 11:38:24.430897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:58.540  [2024-12-16 11:38:24.431414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:58.540  [2024-12-16 11:38:24.431482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:16:58.540  [2024-12-16 11:38:24.431622] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:16:58.540  [2024-12-16 11:38:24.431670] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:16:58.540  [2024-12-16 11:38:24.431723] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:16:58.540  [2024-12-16 11:38:24.431787] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:16:58.540  [2024-12-16 11:38:24.435175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0
00:16:58.540  spare
00:16:58.540   11:38:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:58.540   11:38:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1
00:16:58.540  [2024-12-16 11:38:24.437749] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:16:59.476   11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:59.476   11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:59.476   11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:59.476   11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:59.476   11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:59.476    11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:59.476    11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:59.476    11:38:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:59.476    11:38:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:59.476    11:38:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:59.476   11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:59.476    "name": "raid_bdev1",
00:16:59.476    "uuid": "e8603985-f8f9-4f5d-87eb-631d5fd7a94b",
00:16:59.476    "strip_size_kb": 64,
00:16:59.476    "state": "online",
00:16:59.476    "raid_level": "raid5f",
00:16:59.476    "superblock": true,
00:16:59.476    "num_base_bdevs": 4,
00:16:59.476    "num_base_bdevs_discovered": 4,
00:16:59.476    "num_base_bdevs_operational": 4,
00:16:59.476    "process": {
00:16:59.476      "type": "rebuild",
00:16:59.476      "target": "spare",
00:16:59.476      "progress": {
00:16:59.476        "blocks": 19200,
00:16:59.476        "percent": 10
00:16:59.476      }
00:16:59.476    },
00:16:59.476    "base_bdevs_list": [
00:16:59.476      {
00:16:59.476        "name": "spare",
00:16:59.476        "uuid": "d8f4e1a9-5a08-55fa-86cb-78513fbe3a1e",
00:16:59.476        "is_configured": true,
00:16:59.476        "data_offset": 2048,
00:16:59.476        "data_size": 63488
00:16:59.476      },
00:16:59.476      {
00:16:59.476        "name": "BaseBdev2",
00:16:59.476        "uuid": "ee94e14b-f93d-5309-8653-88f776ab59fa",
00:16:59.476        "is_configured": true,
00:16:59.476        "data_offset": 2048,
00:16:59.476        "data_size": 63488
00:16:59.476      },
00:16:59.476      {
00:16:59.476        "name": "BaseBdev3",
00:16:59.476        "uuid": "f9d121ea-6b13-53b9-a880-8e89ac8ed9ee",
00:16:59.476        "is_configured": true,
00:16:59.476        "data_offset": 2048,
00:16:59.476        "data_size": 63488
00:16:59.476      },
00:16:59.476      {
00:16:59.476        "name": "BaseBdev4",
00:16:59.476        "uuid": "243a9142-a136-543f-aee6-16c5f57e6c0c",
00:16:59.476        "is_configured": true,
00:16:59.476        "data_offset": 2048,
00:16:59.476        "data_size": 63488
00:16:59.476      }
00:16:59.476    ]
00:16:59.476  }'
00:16:59.476    11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:59.476   11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:59.476    11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:59.736   11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:59.736   11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:16:59.736   11:38:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:59.736   11:38:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:59.736  [2024-12-16 11:38:25.586146] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:16:59.736  [2024-12-16 11:38:25.645035] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:16:59.736  [2024-12-16 11:38:25.645122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:59.736  [2024-12-16 11:38:25.645143] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:16:59.736  [2024-12-16 11:38:25.645154] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:16:59.736   11:38:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:59.736   11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:16:59.736   11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:59.736   11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:59.736   11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:59.736   11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:59.736   11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:59.736   11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:59.736   11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:59.736   11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:59.736   11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:59.736    11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:59.736    11:38:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:59.736    11:38:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:59.736    11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:59.736    11:38:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:59.736   11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:59.736    "name": "raid_bdev1",
00:16:59.736    "uuid": "e8603985-f8f9-4f5d-87eb-631d5fd7a94b",
00:16:59.736    "strip_size_kb": 64,
00:16:59.736    "state": "online",
00:16:59.736    "raid_level": "raid5f",
00:16:59.736    "superblock": true,
00:16:59.736    "num_base_bdevs": 4,
00:16:59.736    "num_base_bdevs_discovered": 3,
00:16:59.736    "num_base_bdevs_operational": 3,
00:16:59.736    "base_bdevs_list": [
00:16:59.736      {
00:16:59.736        "name": null,
00:16:59.736        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:59.736        "is_configured": false,
00:16:59.736        "data_offset": 0,
00:16:59.736        "data_size": 63488
00:16:59.736      },
00:16:59.736      {
00:16:59.736        "name": "BaseBdev2",
00:16:59.736        "uuid": "ee94e14b-f93d-5309-8653-88f776ab59fa",
00:16:59.736        "is_configured": true,
00:16:59.736        "data_offset": 2048,
00:16:59.736        "data_size": 63488
00:16:59.736      },
00:16:59.736      {
00:16:59.736        "name": "BaseBdev3",
00:16:59.736        "uuid": "f9d121ea-6b13-53b9-a880-8e89ac8ed9ee",
00:16:59.736        "is_configured": true,
00:16:59.736        "data_offset": 2048,
00:16:59.736        "data_size": 63488
00:16:59.736      },
00:16:59.736      {
00:16:59.736        "name": "BaseBdev4",
00:16:59.736        "uuid": "243a9142-a136-543f-aee6-16c5f57e6c0c",
00:16:59.736        "is_configured": true,
00:16:59.736        "data_offset": 2048,
00:16:59.736        "data_size": 63488
00:16:59.736      }
00:16:59.736    ]
00:16:59.736  }'
00:16:59.736   11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:59.736   11:38:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:00.305   11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:00.305   11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:00.305   11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:00.305   11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:00.305   11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:00.305    11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:00.305    11:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:00.305    11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:00.305    11:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:00.305    11:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:00.305   11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:00.305    "name": "raid_bdev1",
00:17:00.305    "uuid": "e8603985-f8f9-4f5d-87eb-631d5fd7a94b",
00:17:00.305    "strip_size_kb": 64,
00:17:00.305    "state": "online",
00:17:00.305    "raid_level": "raid5f",
00:17:00.305    "superblock": true,
00:17:00.305    "num_base_bdevs": 4,
00:17:00.305    "num_base_bdevs_discovered": 3,
00:17:00.305    "num_base_bdevs_operational": 3,
00:17:00.305    "base_bdevs_list": [
00:17:00.305      {
00:17:00.305        "name": null,
00:17:00.305        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:00.305        "is_configured": false,
00:17:00.305        "data_offset": 0,
00:17:00.305        "data_size": 63488
00:17:00.305      },
00:17:00.305      {
00:17:00.305        "name": "BaseBdev2",
00:17:00.305        "uuid": "ee94e14b-f93d-5309-8653-88f776ab59fa",
00:17:00.305        "is_configured": true,
00:17:00.305        "data_offset": 2048,
00:17:00.305        "data_size": 63488
00:17:00.305      },
00:17:00.305      {
00:17:00.305        "name": "BaseBdev3",
00:17:00.305        "uuid": "f9d121ea-6b13-53b9-a880-8e89ac8ed9ee",
00:17:00.305        "is_configured": true,
00:17:00.305        "data_offset": 2048,
00:17:00.305        "data_size": 63488
00:17:00.305      },
00:17:00.305      {
00:17:00.305        "name": "BaseBdev4",
00:17:00.305        "uuid": "243a9142-a136-543f-aee6-16c5f57e6c0c",
00:17:00.305        "is_configured": true,
00:17:00.305        "data_offset": 2048,
00:17:00.305        "data_size": 63488
00:17:00.305      }
00:17:00.305    ]
00:17:00.305  }'
00:17:00.305    11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:00.305   11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:00.305    11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:00.306   11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:00.306   11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:17:00.306   11:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:00.306   11:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:00.306   11:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:00.306   11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:17:00.306   11:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:00.306   11:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:00.306  [2024-12-16 11:38:26.265134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:17:00.306  [2024-12-16 11:38:26.265216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:00.306  [2024-12-16 11:38:26.265237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980
00:17:00.306  [2024-12-16 11:38:26.265249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:00.306  [2024-12-16 11:38:26.265732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:00.306  [2024-12-16 11:38:26.265756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:17:00.306  [2024-12-16 11:38:26.265834] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:17:00.306  [2024-12-16 11:38:26.265855] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:17:00.306  [2024-12-16 11:38:26.265863] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:17:00.306  [2024-12-16 11:38:26.265875] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:17:00.306  BaseBdev1
00:17:00.306   11:38:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:00.306   11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1
00:17:01.244   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:17:01.244   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:01.244   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:01.244   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:01.244   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:01.244   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:17:01.244   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:01.244   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:01.244   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:01.244   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:01.244    11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:01.244    11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:01.244    11:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:01.244    11:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:01.244    11:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:01.504   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:01.504    "name": "raid_bdev1",
00:17:01.504    "uuid": "e8603985-f8f9-4f5d-87eb-631d5fd7a94b",
00:17:01.504    "strip_size_kb": 64,
00:17:01.504    "state": "online",
00:17:01.504    "raid_level": "raid5f",
00:17:01.504    "superblock": true,
00:17:01.504    "num_base_bdevs": 4,
00:17:01.504    "num_base_bdevs_discovered": 3,
00:17:01.504    "num_base_bdevs_operational": 3,
00:17:01.504    "base_bdevs_list": [
00:17:01.504      {
00:17:01.504        "name": null,
00:17:01.504        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:01.504        "is_configured": false,
00:17:01.504        "data_offset": 0,
00:17:01.504        "data_size": 63488
00:17:01.504      },
00:17:01.504      {
00:17:01.504        "name": "BaseBdev2",
00:17:01.504        "uuid": "ee94e14b-f93d-5309-8653-88f776ab59fa",
00:17:01.504        "is_configured": true,
00:17:01.504        "data_offset": 2048,
00:17:01.504        "data_size": 63488
00:17:01.504      },
00:17:01.504      {
00:17:01.504        "name": "BaseBdev3",
00:17:01.504        "uuid": "f9d121ea-6b13-53b9-a880-8e89ac8ed9ee",
00:17:01.504        "is_configured": true,
00:17:01.504        "data_offset": 2048,
00:17:01.504        "data_size": 63488
00:17:01.504      },
00:17:01.504      {
00:17:01.504        "name": "BaseBdev4",
00:17:01.504        "uuid": "243a9142-a136-543f-aee6-16c5f57e6c0c",
00:17:01.504        "is_configured": true,
00:17:01.504        "data_offset": 2048,
00:17:01.504        "data_size": 63488
00:17:01.504      }
00:17:01.504    ]
00:17:01.504  }'
00:17:01.504   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:01.504   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:01.764   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:01.764   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:01.764   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:01.764   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:01.764   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:01.764    11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:01.764    11:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:01.764    11:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:01.764    11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:01.764    11:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:01.764   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:01.764    "name": "raid_bdev1",
00:17:01.764    "uuid": "e8603985-f8f9-4f5d-87eb-631d5fd7a94b",
00:17:01.764    "strip_size_kb": 64,
00:17:01.764    "state": "online",
00:17:01.764    "raid_level": "raid5f",
00:17:01.764    "superblock": true,
00:17:01.764    "num_base_bdevs": 4,
00:17:01.764    "num_base_bdevs_discovered": 3,
00:17:01.764    "num_base_bdevs_operational": 3,
00:17:01.764    "base_bdevs_list": [
00:17:01.764      {
00:17:01.764        "name": null,
00:17:01.764        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:01.764        "is_configured": false,
00:17:01.764        "data_offset": 0,
00:17:01.764        "data_size": 63488
00:17:01.764      },
00:17:01.764      {
00:17:01.764        "name": "BaseBdev2",
00:17:01.764        "uuid": "ee94e14b-f93d-5309-8653-88f776ab59fa",
00:17:01.764        "is_configured": true,
00:17:01.764        "data_offset": 2048,
00:17:01.764        "data_size": 63488
00:17:01.764      },
00:17:01.764      {
00:17:01.764        "name": "BaseBdev3",
00:17:01.764        "uuid": "f9d121ea-6b13-53b9-a880-8e89ac8ed9ee",
00:17:01.764        "is_configured": true,
00:17:01.764        "data_offset": 2048,
00:17:01.764        "data_size": 63488
00:17:01.764      },
00:17:01.764      {
00:17:01.764        "name": "BaseBdev4",
00:17:01.764        "uuid": "243a9142-a136-543f-aee6-16c5f57e6c0c",
00:17:01.764        "is_configured": true,
00:17:01.764        "data_offset": 2048,
00:17:01.764        "data_size": 63488
00:17:01.764      }
00:17:01.764    ]
00:17:01.764  }'
00:17:01.764    11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:01.764   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:01.764    11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:02.024   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:02.024   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:17:02.024   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0
00:17:02.024   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:17:02.024   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:17:02.024   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:02.024    11:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:17:02.024   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:02.024   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:17:02.024   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:02.024   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:02.024  [2024-12-16 11:38:27.846728] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:02.024  [2024-12-16 11:38:27.846888] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:17:02.024  [2024-12-16 11:38:27.846900] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:17:02.024  request:
00:17:02.024  {
00:17:02.024  "base_bdev": "BaseBdev1",
00:17:02.024  "raid_bdev": "raid_bdev1",
00:17:02.024  "method": "bdev_raid_add_base_bdev",
00:17:02.024  "req_id": 1
00:17:02.024  }
00:17:02.024  Got JSON-RPC error response
00:17:02.024  response:
00:17:02.024  {
00:17:02.024  "code": -22,
00:17:02.024  "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:17:02.024  }
00:17:02.024   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:17:02.024   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1
00:17:02.024   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:02.024   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:17:02.024   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:17:02.024   11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1
00:17:02.963   11:38:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:17:02.963   11:38:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:02.963   11:38:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:02.963   11:38:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:02.963   11:38:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:02.963   11:38:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:17:02.963   11:38:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:02.963   11:38:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:02.963   11:38:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:02.963   11:38:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:02.963    11:38:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:02.963    11:38:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:02.964    11:38:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:02.964    11:38:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:02.964    11:38:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:02.964   11:38:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:02.964    "name": "raid_bdev1",
00:17:02.964    "uuid": "e8603985-f8f9-4f5d-87eb-631d5fd7a94b",
00:17:02.964    "strip_size_kb": 64,
00:17:02.964    "state": "online",
00:17:02.964    "raid_level": "raid5f",
00:17:02.964    "superblock": true,
00:17:02.964    "num_base_bdevs": 4,
00:17:02.964    "num_base_bdevs_discovered": 3,
00:17:02.964    "num_base_bdevs_operational": 3,
00:17:02.964    "base_bdevs_list": [
00:17:02.964      {
00:17:02.964        "name": null,
00:17:02.964        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:02.964        "is_configured": false,
00:17:02.964        "data_offset": 0,
00:17:02.964        "data_size": 63488
00:17:02.964      },
00:17:02.964      {
00:17:02.964        "name": "BaseBdev2",
00:17:02.964        "uuid": "ee94e14b-f93d-5309-8653-88f776ab59fa",
00:17:02.964        "is_configured": true,
00:17:02.964        "data_offset": 2048,
00:17:02.964        "data_size": 63488
00:17:02.964      },
00:17:02.964      {
00:17:02.964        "name": "BaseBdev3",
00:17:02.964        "uuid": "f9d121ea-6b13-53b9-a880-8e89ac8ed9ee",
00:17:02.964        "is_configured": true,
00:17:02.964        "data_offset": 2048,
00:17:02.964        "data_size": 63488
00:17:02.964      },
00:17:02.964      {
00:17:02.964        "name": "BaseBdev4",
00:17:02.964        "uuid": "243a9142-a136-543f-aee6-16c5f57e6c0c",
00:17:02.964        "is_configured": true,
00:17:02.964        "data_offset": 2048,
00:17:02.964        "data_size": 63488
00:17:02.964      }
00:17:02.964    ]
00:17:02.964  }'
00:17:02.964   11:38:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:02.964   11:38:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:03.535   11:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:03.535   11:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:03.535   11:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:03.535   11:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:03.535   11:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:03.535    11:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:03.535    11:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:03.535    11:38:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:03.535    11:38:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:03.535    11:38:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:03.535   11:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:03.535    "name": "raid_bdev1",
00:17:03.535    "uuid": "e8603985-f8f9-4f5d-87eb-631d5fd7a94b",
00:17:03.535    "strip_size_kb": 64,
00:17:03.535    "state": "online",
00:17:03.535    "raid_level": "raid5f",
00:17:03.535    "superblock": true,
00:17:03.535    "num_base_bdevs": 4,
00:17:03.535    "num_base_bdevs_discovered": 3,
00:17:03.535    "num_base_bdevs_operational": 3,
00:17:03.535    "base_bdevs_list": [
00:17:03.535      {
00:17:03.535        "name": null,
00:17:03.535        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:03.535        "is_configured": false,
00:17:03.535        "data_offset": 0,
00:17:03.535        "data_size": 63488
00:17:03.535      },
00:17:03.535      {
00:17:03.535        "name": "BaseBdev2",
00:17:03.535        "uuid": "ee94e14b-f93d-5309-8653-88f776ab59fa",
00:17:03.535        "is_configured": true,
00:17:03.535        "data_offset": 2048,
00:17:03.535        "data_size": 63488
00:17:03.535      },
00:17:03.535      {
00:17:03.535        "name": "BaseBdev3",
00:17:03.535        "uuid": "f9d121ea-6b13-53b9-a880-8e89ac8ed9ee",
00:17:03.535        "is_configured": true,
00:17:03.535        "data_offset": 2048,
00:17:03.535        "data_size": 63488
00:17:03.535      },
00:17:03.535      {
00:17:03.535        "name": "BaseBdev4",
00:17:03.535        "uuid": "243a9142-a136-543f-aee6-16c5f57e6c0c",
00:17:03.535        "is_configured": true,
00:17:03.535        "data_offset": 2048,
00:17:03.535        "data_size": 63488
00:17:03.535      }
00:17:03.535    ]
00:17:03.535  }'
00:17:03.535    11:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:03.535   11:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:03.535    11:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:03.535   11:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:03.535   11:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 95893
00:17:03.535   11:38:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 95893 ']'
00:17:03.535   11:38:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 95893
00:17:03.535    11:38:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname
00:17:03.535   11:38:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:17:03.535    11:38:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95893
00:17:03.535   11:38:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:17:03.535  killing process with pid 95893
00:17:03.535  Received shutdown signal, test time was about 60.000000 seconds
00:17:03.535  
00:17:03.535                                                                                                  Latency(us)
00:17:03.535  
[2024-12-16T11:38:29.602Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:03.535  
[2024-12-16T11:38:29.602Z]  ===================================================================================================================
00:17:03.535  
[2024-12-16T11:38:29.602Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:17:03.535   11:38:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:17:03.535   11:38:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95893'
00:17:03.535   11:38:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 95893
00:17:03.535  [2024-12-16 11:38:29.517530] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:17:03.535  [2024-12-16 11:38:29.517673] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:03.535   11:38:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 95893
00:17:03.535  [2024-12-16 11:38:29.517757] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:03.535  [2024-12-16 11:38:29.517768] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline
00:17:03.535  [2024-12-16 11:38:29.571632] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:17:03.795   11:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0
00:17:03.795  
00:17:03.795  real	0m25.265s
00:17:03.795  user	0m32.129s
00:17:03.795  sys	0m3.000s
00:17:03.795   11:38:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
00:17:03.795  ************************************
00:17:03.795  END TEST raid5f_rebuild_test_sb
00:17:03.795  ************************************
00:17:03.795   11:38:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:03.795   11:38:29 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096
00:17:03.795   11:38:29 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true
00:17:03.795   11:38:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:17:03.795   11:38:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:17:03.795   11:38:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:17:04.055  ************************************
00:17:04.055  START TEST raid_state_function_test_sb_4k
00:17:04.055  ************************************
00:17:04.055   11:38:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true
00:17:04.055   11:38:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:17:04.056   11:38:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:17:04.056   11:38:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:17:04.056   11:38:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:17:04.056    11:38:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:17:04.056    11:38:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:17:04.056    11:38:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:17:04.056    11:38:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:17:04.056    11:38:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:17:04.056    11:38:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:17:04.056    11:38:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:17:04.056    11:38:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:17:04.056   11:38:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:17:04.056   11:38:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:17:04.056   11:38:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:17:04.056   11:38:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size
00:17:04.056   11:38:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:17:04.056   11:38:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:17:04.056   11:38:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:17:04.056   11:38:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:17:04.056   11:38:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:17:04.056   11:38:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:17:04.056   11:38:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=96691
00:17:04.056   11:38:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:17:04.056   11:38:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 96691'
00:17:04.056  Process raid pid: 96691
00:17:04.056   11:38:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 96691
00:17:04.056   11:38:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 96691 ']'
00:17:04.056   11:38:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:04.056  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:04.056   11:38:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100
00:17:04.056   11:38:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:04.056   11:38:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable
00:17:04.056   11:38:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:04.056  [2024-12-16 11:38:29.963527] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:17:04.056  [2024-12-16 11:38:29.963680] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:04.315  [2024-12-16 11:38:30.125258] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:04.315  [2024-12-16 11:38:30.171761] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:17:04.316  [2024-12-16 11:38:30.213805] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:04.316  [2024-12-16 11:38:30.213840] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:04.885   11:38:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:17:04.885   11:38:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0
00:17:04.885   11:38:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:17:04.885   11:38:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:04.885   11:38:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:04.885  [2024-12-16 11:38:30.834965] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:17:04.885  [2024-12-16 11:38:30.835022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:17:04.885  [2024-12-16 11:38:30.835034] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:04.885  [2024-12-16 11:38:30.835060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:04.885   11:38:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:04.885   11:38:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:17:04.885   11:38:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:17:04.885   11:38:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:17:04.885   11:38:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:04.885   11:38:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:04.885   11:38:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:04.885   11:38:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:04.885   11:38:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:04.885   11:38:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:04.885   11:38:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:04.885    11:38:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:04.885    11:38:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:04.885    11:38:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:04.885    11:38:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:04.885    11:38:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:04.885   11:38:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:04.885    "name": "Existed_Raid",
00:17:04.885    "uuid": "7754e4e8-9276-45e2-9fbb-ef8e161b023e",
00:17:04.885    "strip_size_kb": 0,
00:17:04.885    "state": "configuring",
00:17:04.885    "raid_level": "raid1",
00:17:04.885    "superblock": true,
00:17:04.885    "num_base_bdevs": 2,
00:17:04.885    "num_base_bdevs_discovered": 0,
00:17:04.885    "num_base_bdevs_operational": 2,
00:17:04.885    "base_bdevs_list": [
00:17:04.885      {
00:17:04.885        "name": "BaseBdev1",
00:17:04.885        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:04.885        "is_configured": false,
00:17:04.885        "data_offset": 0,
00:17:04.885        "data_size": 0
00:17:04.885      },
00:17:04.885      {
00:17:04.885        "name": "BaseBdev2",
00:17:04.885        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:04.885        "is_configured": false,
00:17:04.885        "data_offset": 0,
00:17:04.885        "data_size": 0
00:17:04.885      }
00:17:04.885    ]
00:17:04.885  }'
00:17:04.885   11:38:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:04.885   11:38:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:05.455   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:17:05.455   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:05.455   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:05.455  [2024-12-16 11:38:31.282135] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:17:05.455  [2024-12-16 11:38:31.282247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:17:05.455   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:05.455   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:17:05.455   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:05.455   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:05.455  [2024-12-16 11:38:31.294152] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:17:05.455  [2024-12-16 11:38:31.294236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:17:05.455  [2024-12-16 11:38:31.294263] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:05.455  [2024-12-16 11:38:31.294286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:05.455   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:05.455   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1
00:17:05.455   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:05.455   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:05.455  [2024-12-16 11:38:31.315049] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:05.455  BaseBdev1
00:17:05.455   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:05.455   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:17:05.455   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:17:05.455   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:17:05.455   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i
00:17:05.455   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:17:05.455   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:17:05.455   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:17:05.455   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:05.455   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:05.455   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:05.455   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:17:05.455   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:05.455   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:05.455  [
00:17:05.455  {
00:17:05.455  "name": "BaseBdev1",
00:17:05.455  "aliases": [
00:17:05.455  "5bf740f7-e0a3-4f41-8147-8cdbffe81375"
00:17:05.455  ],
00:17:05.455  "product_name": "Malloc disk",
00:17:05.455  "block_size": 4096,
00:17:05.455  "num_blocks": 8192,
00:17:05.455  "uuid": "5bf740f7-e0a3-4f41-8147-8cdbffe81375",
00:17:05.455  "assigned_rate_limits": {
00:17:05.455  "rw_ios_per_sec": 0,
00:17:05.455  "rw_mbytes_per_sec": 0,
00:17:05.455  "r_mbytes_per_sec": 0,
00:17:05.455  "w_mbytes_per_sec": 0
00:17:05.455  },
00:17:05.455  "claimed": true,
00:17:05.455  "claim_type": "exclusive_write",
00:17:05.455  "zoned": false,
00:17:05.455  "supported_io_types": {
00:17:05.455  "read": true,
00:17:05.455  "write": true,
00:17:05.455  "unmap": true,
00:17:05.455  "flush": true,
00:17:05.455  "reset": true,
00:17:05.455  "nvme_admin": false,
00:17:05.455  "nvme_io": false,
00:17:05.455  "nvme_io_md": false,
00:17:05.455  "write_zeroes": true,
00:17:05.455  "zcopy": true,
00:17:05.455  "get_zone_info": false,
00:17:05.455  "zone_management": false,
00:17:05.455  "zone_append": false,
00:17:05.455  "compare": false,
00:17:05.455  "compare_and_write": false,
00:17:05.455  "abort": true,
00:17:05.455  "seek_hole": false,
00:17:05.455  "seek_data": false,
00:17:05.455  "copy": true,
00:17:05.455  "nvme_iov_md": false
00:17:05.455  },
00:17:05.455  "memory_domains": [
00:17:05.455  {
00:17:05.455  "dma_device_id": "system",
00:17:05.455  "dma_device_type": 1
00:17:05.455  },
00:17:05.455  {
00:17:05.455  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:05.455  "dma_device_type": 2
00:17:05.455  }
00:17:05.455  ],
00:17:05.455  "driver_specific": {}
00:17:05.455  }
00:17:05.456  ]
00:17:05.456   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:05.456   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0
00:17:05.456   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:17:05.456   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:17:05.456   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:17:05.456   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:05.456   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:05.456   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:05.456   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:05.456   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:05.456   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:05.456   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:05.456    11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:05.456    11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:05.456    11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:05.456    11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:05.456    11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:05.456   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:05.456    "name": "Existed_Raid",
00:17:05.456    "uuid": "a08383c9-92ba-4048-9f25-501dcedf8be6",
00:17:05.456    "strip_size_kb": 0,
00:17:05.456    "state": "configuring",
00:17:05.456    "raid_level": "raid1",
00:17:05.456    "superblock": true,
00:17:05.456    "num_base_bdevs": 2,
00:17:05.456    "num_base_bdevs_discovered": 1,
00:17:05.456    "num_base_bdevs_operational": 2,
00:17:05.456    "base_bdevs_list": [
00:17:05.456      {
00:17:05.456        "name": "BaseBdev1",
00:17:05.456        "uuid": "5bf740f7-e0a3-4f41-8147-8cdbffe81375",
00:17:05.456        "is_configured": true,
00:17:05.456        "data_offset": 256,
00:17:05.456        "data_size": 7936
00:17:05.456      },
00:17:05.456      {
00:17:05.456        "name": "BaseBdev2",
00:17:05.456        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:05.456        "is_configured": false,
00:17:05.456        "data_offset": 0,
00:17:05.456        "data_size": 0
00:17:05.456      }
00:17:05.456    ]
00:17:05.456  }'
00:17:05.456   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:05.456   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:06.024   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:17:06.024   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:06.024   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:06.024  [2024-12-16 11:38:31.822266] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:17:06.024  [2024-12-16 11:38:31.822334] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:17:06.024   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:06.024   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:17:06.024   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:06.024   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:06.024  [2024-12-16 11:38:31.830277] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:06.024  [2024-12-16 11:38:31.832304] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:06.024  [2024-12-16 11:38:31.832417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:06.024   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:06.024   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:17:06.024   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:17:06.024   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:17:06.024   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:17:06.024   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:17:06.024   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:06.024   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:06.024   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:06.024   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:06.024   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:06.024   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:06.024   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:06.024    11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:06.024    11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:06.024    11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:06.024    11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:06.024    11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:06.024   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:06.024    "name": "Existed_Raid",
00:17:06.024    "uuid": "0225265a-1053-4748-a2cb-3752d341005b",
00:17:06.024    "strip_size_kb": 0,
00:17:06.024    "state": "configuring",
00:17:06.024    "raid_level": "raid1",
00:17:06.024    "superblock": true,
00:17:06.024    "num_base_bdevs": 2,
00:17:06.024    "num_base_bdevs_discovered": 1,
00:17:06.024    "num_base_bdevs_operational": 2,
00:17:06.024    "base_bdevs_list": [
00:17:06.024      {
00:17:06.024        "name": "BaseBdev1",
00:17:06.024        "uuid": "5bf740f7-e0a3-4f41-8147-8cdbffe81375",
00:17:06.024        "is_configured": true,
00:17:06.024        "data_offset": 256,
00:17:06.024        "data_size": 7936
00:17:06.024      },
00:17:06.024      {
00:17:06.024        "name": "BaseBdev2",
00:17:06.024        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:06.024        "is_configured": false,
00:17:06.024        "data_offset": 0,
00:17:06.024        "data_size": 0
00:17:06.024      }
00:17:06.024    ]
00:17:06.024  }'
00:17:06.024   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:06.024   11:38:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:06.284   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2
00:17:06.284   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:06.284   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:06.284  [2024-12-16 11:38:32.259157] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:17:06.284  [2024-12-16 11:38:32.259524] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:17:06.284  [2024-12-16 11:38:32.259613] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:17:06.284  [2024-12-16 11:38:32.259994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:17:06.284  BaseBdev2
00:17:06.284  [2024-12-16 11:38:32.260233] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:17:06.284  [2024-12-16 11:38:32.260259] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:17:06.284  [2024-12-16 11:38:32.260419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:06.284   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:06.284   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:17:06.284   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:17:06.284   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:17:06.284   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i
00:17:06.284   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:17:06.284   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:17:06.284   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:17:06.284   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:06.284   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:06.284   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:06.284   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:17:06.284   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:06.284   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:06.284  [
00:17:06.284  {
00:17:06.284  "name": "BaseBdev2",
00:17:06.284  "aliases": [
00:17:06.284  "90aab2b1-c78e-4937-9bbc-4f7497b18824"
00:17:06.284  ],
00:17:06.284  "product_name": "Malloc disk",
00:17:06.284  "block_size": 4096,
00:17:06.284  "num_blocks": 8192,
00:17:06.284  "uuid": "90aab2b1-c78e-4937-9bbc-4f7497b18824",
00:17:06.284  "assigned_rate_limits": {
00:17:06.284  "rw_ios_per_sec": 0,
00:17:06.284  "rw_mbytes_per_sec": 0,
00:17:06.284  "r_mbytes_per_sec": 0,
00:17:06.284  "w_mbytes_per_sec": 0
00:17:06.284  },
00:17:06.284  "claimed": true,
00:17:06.284  "claim_type": "exclusive_write",
00:17:06.284  "zoned": false,
00:17:06.284  "supported_io_types": {
00:17:06.284  "read": true,
00:17:06.284  "write": true,
00:17:06.284  "unmap": true,
00:17:06.284  "flush": true,
00:17:06.284  "reset": true,
00:17:06.284  "nvme_admin": false,
00:17:06.284  "nvme_io": false,
00:17:06.284  "nvme_io_md": false,
00:17:06.284  "write_zeroes": true,
00:17:06.284  "zcopy": true,
00:17:06.284  "get_zone_info": false,
00:17:06.284  "zone_management": false,
00:17:06.284  "zone_append": false,
00:17:06.284  "compare": false,
00:17:06.284  "compare_and_write": false,
00:17:06.284  "abort": true,
00:17:06.284  "seek_hole": false,
00:17:06.284  "seek_data": false,
00:17:06.284  "copy": true,
00:17:06.284  "nvme_iov_md": false
00:17:06.284  },
00:17:06.284  "memory_domains": [
00:17:06.284  {
00:17:06.284  "dma_device_id": "system",
00:17:06.284  "dma_device_type": 1
00:17:06.284  },
00:17:06.284  {
00:17:06.284  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:06.284  "dma_device_type": 2
00:17:06.284  }
00:17:06.284  ],
00:17:06.284  "driver_specific": {}
00:17:06.284  }
00:17:06.284  ]
00:17:06.284   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:06.284   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0
00:17:06.284   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:17:06.284   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:17:06.284   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:17:06.284   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:17:06.284   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:06.284   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:06.284   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:06.284   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:06.284   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:06.284   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:06.284   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:06.284   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:06.284    11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:06.284    11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:06.284    11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:06.284    11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:06.284    11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:06.544   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:06.544    "name": "Existed_Raid",
00:17:06.544    "uuid": "0225265a-1053-4748-a2cb-3752d341005b",
00:17:06.544    "strip_size_kb": 0,
00:17:06.544    "state": "online",
00:17:06.544    "raid_level": "raid1",
00:17:06.544    "superblock": true,
00:17:06.544    "num_base_bdevs": 2,
00:17:06.544    "num_base_bdevs_discovered": 2,
00:17:06.544    "num_base_bdevs_operational": 2,
00:17:06.544    "base_bdevs_list": [
00:17:06.544      {
00:17:06.544        "name": "BaseBdev1",
00:17:06.544        "uuid": "5bf740f7-e0a3-4f41-8147-8cdbffe81375",
00:17:06.544        "is_configured": true,
00:17:06.544        "data_offset": 256,
00:17:06.544        "data_size": 7936
00:17:06.544      },
00:17:06.544      {
00:17:06.544        "name": "BaseBdev2",
00:17:06.544        "uuid": "90aab2b1-c78e-4937-9bbc-4f7497b18824",
00:17:06.544        "is_configured": true,
00:17:06.544        "data_offset": 256,
00:17:06.544        "data_size": 7936
00:17:06.544      }
00:17:06.544    ]
00:17:06.544  }'
00:17:06.544   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:06.544   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:06.803   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:17:06.803   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:17:06.803   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:17:06.803   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:17:06.803   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name
00:17:06.803   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:17:06.803    11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:17:06.803    11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:17:06.803    11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:06.803    11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:06.803  [2024-12-16 11:38:32.746714] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:06.803    11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:06.803   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:17:06.803    "name": "Existed_Raid",
00:17:06.803    "aliases": [
00:17:06.803      "0225265a-1053-4748-a2cb-3752d341005b"
00:17:06.803    ],
00:17:06.803    "product_name": "Raid Volume",
00:17:06.803    "block_size": 4096,
00:17:06.803    "num_blocks": 7936,
00:17:06.803    "uuid": "0225265a-1053-4748-a2cb-3752d341005b",
00:17:06.803    "assigned_rate_limits": {
00:17:06.803      "rw_ios_per_sec": 0,
00:17:06.803      "rw_mbytes_per_sec": 0,
00:17:06.803      "r_mbytes_per_sec": 0,
00:17:06.803      "w_mbytes_per_sec": 0
00:17:06.803    },
00:17:06.803    "claimed": false,
00:17:06.803    "zoned": false,
00:17:06.803    "supported_io_types": {
00:17:06.803      "read": true,
00:17:06.803      "write": true,
00:17:06.803      "unmap": false,
00:17:06.803      "flush": false,
00:17:06.803      "reset": true,
00:17:06.803      "nvme_admin": false,
00:17:06.803      "nvme_io": false,
00:17:06.803      "nvme_io_md": false,
00:17:06.803      "write_zeroes": true,
00:17:06.803      "zcopy": false,
00:17:06.803      "get_zone_info": false,
00:17:06.803      "zone_management": false,
00:17:06.803      "zone_append": false,
00:17:06.803      "compare": false,
00:17:06.804      "compare_and_write": false,
00:17:06.804      "abort": false,
00:17:06.804      "seek_hole": false,
00:17:06.804      "seek_data": false,
00:17:06.804      "copy": false,
00:17:06.804      "nvme_iov_md": false
00:17:06.804    },
00:17:06.804    "memory_domains": [
00:17:06.804      {
00:17:06.804        "dma_device_id": "system",
00:17:06.804        "dma_device_type": 1
00:17:06.804      },
00:17:06.804      {
00:17:06.804        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:06.804        "dma_device_type": 2
00:17:06.804      },
00:17:06.804      {
00:17:06.804        "dma_device_id": "system",
00:17:06.804        "dma_device_type": 1
00:17:06.804      },
00:17:06.804      {
00:17:06.804        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:06.804        "dma_device_type": 2
00:17:06.804      }
00:17:06.804    ],
00:17:06.804    "driver_specific": {
00:17:06.804      "raid": {
00:17:06.804        "uuid": "0225265a-1053-4748-a2cb-3752d341005b",
00:17:06.804        "strip_size_kb": 0,
00:17:06.804        "state": "online",
00:17:06.804        "raid_level": "raid1",
00:17:06.804        "superblock": true,
00:17:06.804        "num_base_bdevs": 2,
00:17:06.804        "num_base_bdevs_discovered": 2,
00:17:06.804        "num_base_bdevs_operational": 2,
00:17:06.804        "base_bdevs_list": [
00:17:06.804          {
00:17:06.804            "name": "BaseBdev1",
00:17:06.804            "uuid": "5bf740f7-e0a3-4f41-8147-8cdbffe81375",
00:17:06.804            "is_configured": true,
00:17:06.804            "data_offset": 256,
00:17:06.804            "data_size": 7936
00:17:06.804          },
00:17:06.804          {
00:17:06.804            "name": "BaseBdev2",
00:17:06.804            "uuid": "90aab2b1-c78e-4937-9bbc-4f7497b18824",
00:17:06.804            "is_configured": true,
00:17:06.804            "data_offset": 256,
00:17:06.804            "data_size": 7936
00:17:06.804          }
00:17:06.804        ]
00:17:06.804      }
00:17:06.804    }
00:17:06.804  }'
00:17:06.804    11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:17:06.804   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:17:06.804  BaseBdev2'
00:17:06.804    11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:06.804   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096   '
00:17:06.804   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:06.804    11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:17:06.804    11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:06.804    11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:06.804    11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:07.062    11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:07.062   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096   '
00:17:07.062   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096    == \4\0\9\6\ \ \  ]]
00:17:07.062   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:07.062    11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:17:07.062    11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:07.062    11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:07.062    11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:07.062    11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:07.062   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096   '
00:17:07.062   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096    == \4\0\9\6\ \ \  ]]
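For reference, the four-field comparison above (the string '4096   ' on both sides) can be reproduced by hand against the running target. This is only a sketch of the idea, assuming the stock scripts/rpc.py client and the default /var/tmp/spdk.sock socket rather than the test's rpc_cmd wrapper:

    # Raid-level view of block_size / md_size / md_interleave / dif_type
    ./scripts/rpc.py bdev_get_bdevs -b Existed_Raid \
        | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
    # Same four fields for one base bdev; the test passes only if both strings match exactly
    ./scripts/rpc.py bdev_get_bdevs -b BaseBdev1 \
        | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'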
00:17:07.062   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:17:07.062   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:07.062   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:07.062  [2024-12-16 11:38:32.950095] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:17:07.062   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:07.062   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state
00:17:07.062   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:17:07.062   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in
00:17:07.062   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0
00:17:07.062   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:17:07.062   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1
00:17:07.062   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:17:07.062   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:07.062   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:07.062   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:07.062   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:07.062   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:07.062   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:07.062   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:07.062   11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:07.063    11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:07.063    11:38:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:07.063    11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:07.063    11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:07.063    11:38:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:07.063   11:38:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:07.063    "name": "Existed_Raid",
00:17:07.063    "uuid": "0225265a-1053-4748-a2cb-3752d341005b",
00:17:07.063    "strip_size_kb": 0,
00:17:07.063    "state": "online",
00:17:07.063    "raid_level": "raid1",
00:17:07.063    "superblock": true,
00:17:07.063    "num_base_bdevs": 2,
00:17:07.063    "num_base_bdevs_discovered": 1,
00:17:07.063    "num_base_bdevs_operational": 1,
00:17:07.063    "base_bdevs_list": [
00:17:07.063      {
00:17:07.063        "name": null,
00:17:07.063        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:07.063        "is_configured": false,
00:17:07.063        "data_offset": 0,
00:17:07.063        "data_size": 7936
00:17:07.063      },
00:17:07.063      {
00:17:07.063        "name": "BaseBdev2",
00:17:07.063        "uuid": "90aab2b1-c78e-4937-9bbc-4f7497b18824",
00:17:07.063        "is_configured": true,
00:17:07.063        "data_offset": 256,
00:17:07.063        "data_size": 7936
00:17:07.063      }
00:17:07.063    ]
00:17:07.063  }'
00:17:07.063   11:38:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:07.063   11:38:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:07.633   11:38:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:17:07.633   11:38:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:17:07.633    11:38:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:07.633    11:38:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:07.633    11:38:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:07.633    11:38:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:17:07.633    11:38:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:07.633   11:38:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:17:07.633   11:38:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:17:07.633   11:38:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:17:07.633   11:38:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:07.633   11:38:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:07.633  [2024-12-16 11:38:33.514123] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:17:07.633  [2024-12-16 11:38:33.514313] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:07.633  [2024-12-16 11:38:33.526034] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:07.633  [2024-12-16 11:38:33.526087] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:07.633  [2024-12-16 11:38:33.526099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline
00:17:07.633   11:38:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:07.633   11:38:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:17:07.633   11:38:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:17:07.633    11:38:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:07.633    11:38:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:17:07.633    11:38:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:07.633    11:38:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:07.633    11:38:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:07.633   11:38:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:17:07.633   11:38:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:17:07.633   11:38:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:17:07.633   11:38:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 96691
00:17:07.633   11:38:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 96691 ']'
00:17:07.633   11:38:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 96691
00:17:07.633    11:38:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname
00:17:07.633   11:38:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:17:07.633    11:38:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96691
00:17:07.633  killing process with pid 96691
00:17:07.633   11:38:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:17:07.633   11:38:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:17:07.633   11:38:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96691'
00:17:07.633   11:38:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 96691
00:17:07.633  [2024-12-16 11:38:33.616160] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:17:07.633   11:38:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 96691
00:17:07.633  [2024-12-16 11:38:33.617161] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:17:07.894   11:38:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0
00:17:07.894  
00:17:07.894  real	0m3.992s
00:17:07.894  user	0m6.248s
00:17:07.894  sys	0m0.859s
00:17:07.894   11:38:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable
00:17:07.894  ************************************
00:17:07.894  END TEST raid_state_function_test_sb_4k
00:17:07.894  ************************************
00:17:07.894   11:38:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
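The tail of that test exercised raid1 redundancy by deleting the mirror legs one at a time: after BaseBdev1 went away the array stayed online, and only after BaseBdev2 was removed did Existed_Raid disappear. A rough replay of that RPC sequence, given purely as an illustration (default rpc.py client assumed), would be:

    ./scripts/rpc.py bdev_malloc_delete BaseBdev1
    ./scripts/rpc.py bdev_raid_get_bdevs all | jq -r '.[0]["name"]'               # still prints Existed_Raid
    ./scripts/rpc.py bdev_malloc_delete BaseBdev2
    ./scripts/rpc.py bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)'   # prints nothing: the raid bdev is gone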
00:17:07.894   11:38:33 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2
00:17:07.894   11:38:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:17:07.894   11:38:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:17:07.894   11:38:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:17:07.894  ************************************
00:17:07.894  START TEST raid_superblock_test_4k
00:17:07.894  ************************************
00:17:07.894   11:38:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2
00:17:07.894   11:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:17:07.894   11:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:17:07.894   11:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:17:07.894   11:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:17:07.894   11:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:17:07.894   11:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:17:07.894   11:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:17:07.894   11:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:17:07.894   11:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:17:07.894   11:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size
00:17:07.894   11:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:17:07.894   11:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:17:07.894   11:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:17:07.894   11:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:17:07.894   11:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:17:07.894   11:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=96927
00:17:07.894   11:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:17:07.894   11:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 96927
00:17:07.894   11:38:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 96927 ']'
00:17:07.894   11:38:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:07.894   11:38:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100
00:17:07.894  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:07.894   11:38:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:07.894   11:38:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable
00:17:07.894   11:38:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:08.154  [2024-12-16 11:38:34.027979] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:17:08.154  [2024-12-16 11:38:34.028151] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96927 ]
00:17:08.154  [2024-12-16 11:38:34.188227] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:08.414  [2024-12-16 11:38:34.234098] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:17:08.414  [2024-12-16 11:38:34.276504] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:08.414  [2024-12-16 11:38:34.276637] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:08.985  malloc1
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:08.985  [2024-12-16 11:38:34.890560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:17:08.985  [2024-12-16 11:38:34.890692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:08.985  [2024-12-16 11:38:34.890738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:17:08.985  [2024-12-16 11:38:34.890777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:08.985  [2024-12-16 11:38:34.892898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:08.985  [2024-12-16 11:38:34.892975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:17:08.985  pt1
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:08.985  malloc2
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:08.985  [2024-12-16 11:38:34.934198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:17:08.985  [2024-12-16 11:38:34.934325] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:08.985  [2024-12-16 11:38:34.934353] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:17:08.985  [2024-12-16 11:38:34.934367] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:08.985  [2024-12-16 11:38:34.937214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:08.985  [2024-12-16 11:38:34.937264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:17:08.985  pt2
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:08.985  [2024-12-16 11:38:34.946201] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:17:08.985  [2024-12-16 11:38:34.948061] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:08.985  [2024-12-16 11:38:34.948197] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:17:08.985  [2024-12-16 11:38:34.948212] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:17:08.985  [2024-12-16 11:38:34.948455] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:17:08.985  [2024-12-16 11:38:34.948614] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:17:08.985  [2024-12-16 11:38:34.948626] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:17:08.985  [2024-12-16 11:38:34.948749] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:08.985   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:08.986   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:08.986   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:08.986   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:08.986   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:08.986   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:08.986    11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:08.986    11:38:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:08.986    11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:08.986    11:38:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:08.986    11:38:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:08.986   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:08.986    "name": "raid_bdev1",
00:17:08.986    "uuid": "8eaf65b7-ee69-483a-a0c8-bbad51fa0b45",
00:17:08.986    "strip_size_kb": 0,
00:17:08.986    "state": "online",
00:17:08.986    "raid_level": "raid1",
00:17:08.986    "superblock": true,
00:17:08.986    "num_base_bdevs": 2,
00:17:08.986    "num_base_bdevs_discovered": 2,
00:17:08.986    "num_base_bdevs_operational": 2,
00:17:08.986    "base_bdevs_list": [
00:17:08.986      {
00:17:08.986        "name": "pt1",
00:17:08.986        "uuid": "00000000-0000-0000-0000-000000000001",
00:17:08.986        "is_configured": true,
00:17:08.986        "data_offset": 256,
00:17:08.986        "data_size": 7936
00:17:08.986      },
00:17:08.986      {
00:17:08.986        "name": "pt2",
00:17:08.986        "uuid": "00000000-0000-0000-0000-000000000002",
00:17:08.986        "is_configured": true,
00:17:08.986        "data_offset": 256,
00:17:08.986        "data_size": 7936
00:17:08.986      }
00:17:08.986    ]
00:17:08.986  }'
00:17:08.986   11:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:08.986   11:38:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
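raid_superblock_test builds its array out of passthru bdevs layered on malloc bdevs so that the on-media superblock written by -s can later be examined and re-assembled. A condensed sketch of the setup RPCs traced above (the loop form and the direct rpc.py invocation are illustrative; the individual commands are the ones in the trace):

    for i in 1 2; do
        ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc$i
        ./scripts/rpc.py bdev_passthru_create -b malloc$i -p pt$i \
            -u 00000000-0000-0000-0000-00000000000$i
    done
    # -s asks the raid module to write a superblock onto each base bdev
    ./scripts/rpc.py bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s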
00:17:09.560   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:17:09.560   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:17:09.560   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:17:09.560   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:17:09.560   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name
00:17:09.560   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:17:09.560    11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:17:09.560    11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:09.560    11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:09.560    11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:09.560  [2024-12-16 11:38:35.373758] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:09.560    11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:09.560   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:17:09.560    "name": "raid_bdev1",
00:17:09.560    "aliases": [
00:17:09.560      "8eaf65b7-ee69-483a-a0c8-bbad51fa0b45"
00:17:09.560    ],
00:17:09.560    "product_name": "Raid Volume",
00:17:09.560    "block_size": 4096,
00:17:09.560    "num_blocks": 7936,
00:17:09.560    "uuid": "8eaf65b7-ee69-483a-a0c8-bbad51fa0b45",
00:17:09.560    "assigned_rate_limits": {
00:17:09.560      "rw_ios_per_sec": 0,
00:17:09.560      "rw_mbytes_per_sec": 0,
00:17:09.560      "r_mbytes_per_sec": 0,
00:17:09.560      "w_mbytes_per_sec": 0
00:17:09.560    },
00:17:09.560    "claimed": false,
00:17:09.560    "zoned": false,
00:17:09.560    "supported_io_types": {
00:17:09.560      "read": true,
00:17:09.560      "write": true,
00:17:09.560      "unmap": false,
00:17:09.560      "flush": false,
00:17:09.560      "reset": true,
00:17:09.560      "nvme_admin": false,
00:17:09.560      "nvme_io": false,
00:17:09.560      "nvme_io_md": false,
00:17:09.560      "write_zeroes": true,
00:17:09.560      "zcopy": false,
00:17:09.560      "get_zone_info": false,
00:17:09.560      "zone_management": false,
00:17:09.560      "zone_append": false,
00:17:09.560      "compare": false,
00:17:09.560      "compare_and_write": false,
00:17:09.560      "abort": false,
00:17:09.560      "seek_hole": false,
00:17:09.560      "seek_data": false,
00:17:09.560      "copy": false,
00:17:09.560      "nvme_iov_md": false
00:17:09.560    },
00:17:09.560    "memory_domains": [
00:17:09.560      {
00:17:09.560        "dma_device_id": "system",
00:17:09.560        "dma_device_type": 1
00:17:09.560      },
00:17:09.560      {
00:17:09.560        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:09.560        "dma_device_type": 2
00:17:09.560      },
00:17:09.560      {
00:17:09.560        "dma_device_id": "system",
00:17:09.560        "dma_device_type": 1
00:17:09.560      },
00:17:09.560      {
00:17:09.560        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:09.560        "dma_device_type": 2
00:17:09.560      }
00:17:09.560    ],
00:17:09.560    "driver_specific": {
00:17:09.560      "raid": {
00:17:09.560        "uuid": "8eaf65b7-ee69-483a-a0c8-bbad51fa0b45",
00:17:09.560        "strip_size_kb": 0,
00:17:09.560        "state": "online",
00:17:09.560        "raid_level": "raid1",
00:17:09.560        "superblock": true,
00:17:09.560        "num_base_bdevs": 2,
00:17:09.560        "num_base_bdevs_discovered": 2,
00:17:09.560        "num_base_bdevs_operational": 2,
00:17:09.560        "base_bdevs_list": [
00:17:09.560          {
00:17:09.560            "name": "pt1",
00:17:09.560            "uuid": "00000000-0000-0000-0000-000000000001",
00:17:09.560            "is_configured": true,
00:17:09.560            "data_offset": 256,
00:17:09.560            "data_size": 7936
00:17:09.560          },
00:17:09.560          {
00:17:09.560            "name": "pt2",
00:17:09.560            "uuid": "00000000-0000-0000-0000-000000000002",
00:17:09.560            "is_configured": true,
00:17:09.560            "data_offset": 256,
00:17:09.560            "data_size": 7936
00:17:09.560          }
00:17:09.560        ]
00:17:09.560      }
00:17:09.560    }
00:17:09.560  }'
00:17:09.560    11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:17:09.560   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:17:09.560  pt2'
00:17:09.560    11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:09.560   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096   '
00:17:09.560   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:09.560    11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:09.560    11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:17:09.560    11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:09.560    11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:09.560    11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:09.560   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096   '
00:17:09.560   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096    == \4\0\9\6\ \ \  ]]
00:17:09.560   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:09.560    11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:09.560    11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:17:09.560    11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:09.560    11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:09.560    11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:09.560   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096   '
00:17:09.560   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096    == \4\0\9\6\ \ \  ]]
00:17:09.560    11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:09.560    11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:17:09.560    11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:09.560    11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:09.560  [2024-12-16 11:38:35.581347] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:09.560    11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:09.560   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8eaf65b7-ee69-483a-a0c8-bbad51fa0b45
00:17:09.560   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 8eaf65b7-ee69-483a-a0c8-bbad51fa0b45 ']'
00:17:09.560   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:17:09.560   11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:09.560   11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:09.560  [2024-12-16 11:38:35.620979] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:09.560  [2024-12-16 11:38:35.621009] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:09.560  [2024-12-16 11:38:35.621083] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:09.560  [2024-12-16 11:38:35.621155] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:09.560  [2024-12-16 11:38:35.621166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:09.821    11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:17:09.821    11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:09.821    11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:09.821    11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:09.821    11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:09.821    11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:17:09.821    11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:17:09.821    11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:09.821    11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:09.821    11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
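Explicit teardown mirrors the setup: the array is deleted first, then the passthru bdevs, and the trace above confirms nothing is left behind. As a standalone sketch (same assumptions about the rpc.py client as before):

    ./scripts/rpc.py bdev_raid_delete raid_bdev1
    for i in 1 2; do ./scripts/rpc.py bdev_passthru_delete pt$i; done
    ./scripts/rpc.py bdev_raid_get_bdevs all | jq -r '.[]'                                           # expect no output
    ./scripts/rpc.py bdev_get_bdevs | jq -r '[.[] | select(.product_name == "passthru")] | any'      # expect false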
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:09.821    11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:09.821  [2024-12-16 11:38:35.740840] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:17:09.821  [2024-12-16 11:38:35.742876] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:17:09.821  [2024-12-16 11:38:35.742960] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:17:09.821  [2024-12-16 11:38:35.743018] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:17:09.821  [2024-12-16 11:38:35.743038] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:09.821  [2024-12-16 11:38:35.743049] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring
00:17:09.821  request:
00:17:09.821  {
00:17:09.821  "name": "raid_bdev1",
00:17:09.821  "raid_level": "raid1",
00:17:09.821  "base_bdevs": [
00:17:09.821  "malloc1",
00:17:09.821  "malloc2"
00:17:09.821  ],
00:17:09.821  "superblock": false,
00:17:09.821  "method": "bdev_raid_create",
00:17:09.821  "req_id": 1
00:17:09.821  }
00:17:09.821  Got JSON-RPC error response
00:17:09.821  response:
00:17:09.821  {
00:17:09.821  "code": -17,
00:17:09.821  "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:17:09.821  }
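That -17 / 'File exists' response is the expected result here: malloc1 and malloc2 still carry the superblock written for the raid_bdev1 that was just deleted, so creating a fresh array directly on top of them is refused. A hedged sketch of scripting that negative check by hand (the test itself goes through its NOT wrapper from autotest_common.sh):

    if ./scripts/rpc.py bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1; then
        echo "unexpected success: stale superblocks should block direct re-creation" >&2
        exit 1
    fi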
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:17:09.821    11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:17:09.821    11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:09.821    11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:09.821    11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:09.821    11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:09.821  [2024-12-16 11:38:35.800685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:17:09.821  [2024-12-16 11:38:35.800787] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:09.821  [2024-12-16 11:38:35.800840] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:17:09.821  [2024-12-16 11:38:35.800875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:09.821  [2024-12-16 11:38:35.803022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:09.821  [2024-12-16 11:38:35.803094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:17:09.821  [2024-12-16 11:38:35.803191] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:17:09.821  [2024-12-16 11:38:35.803267] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:17:09.821  pt1
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:09.821    11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:09.821    11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:09.821    11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:09.821    11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:09.821    11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:09.821   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:09.821    "name": "raid_bdev1",
00:17:09.821    "uuid": "8eaf65b7-ee69-483a-a0c8-bbad51fa0b45",
00:17:09.821    "strip_size_kb": 0,
00:17:09.821    "state": "configuring",
00:17:09.821    "raid_level": "raid1",
00:17:09.821    "superblock": true,
00:17:09.821    "num_base_bdevs": 2,
00:17:09.821    "num_base_bdevs_discovered": 1,
00:17:09.821    "num_base_bdevs_operational": 2,
00:17:09.821    "base_bdevs_list": [
00:17:09.821      {
00:17:09.821        "name": "pt1",
00:17:09.821        "uuid": "00000000-0000-0000-0000-000000000001",
00:17:09.821        "is_configured": true,
00:17:09.821        "data_offset": 256,
00:17:09.821        "data_size": 7936
00:17:09.821      },
00:17:09.821      {
00:17:09.821        "name": null,
00:17:09.822        "uuid": "00000000-0000-0000-0000-000000000002",
00:17:09.822        "is_configured": false,
00:17:09.822        "data_offset": 256,
00:17:09.822        "data_size": 7936
00:17:09.822      }
00:17:09.822    ]
00:17:09.822  }'
00:17:09.822   11:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:09.822   11:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
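verify_raid_bdev_state, as traced above, reduces to pulling the raid bdev's JSON and comparing a handful of fields against expectations (here: state 'configuring' with 1 of 2 base bdevs discovered, because only pt1 is back). A minimal stand-in using the same jq filter; the helper name is hypothetical, not the test's own:

    check_raid_state() {   # usage: check_raid_state <raid_name> <expected_state>
        local info
        info=$(./scripts/rpc.py bdev_raid_get_bdevs all | jq ".[] | select(.name == \"$1\")")
        [[ $(jq -r '.state' <<< "$info") == "$2" ]] &&
        [[ $(jq -r '.raid_level' <<< "$info") == raid1 ]]
    }
    check_raid_state raid_bdev1 configuring   # only pt1 has been re-created at this point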
00:17:10.392   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:17:10.392   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:17:10.392   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:17:10.392   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:17:10.392   11:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:10.392   11:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:10.392  [2024-12-16 11:38:36.271928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:17:10.392  [2024-12-16 11:38:36.272004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:10.392  [2024-12-16 11:38:36.272032] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:17:10.392  [2024-12-16 11:38:36.272043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:10.392  [2024-12-16 11:38:36.272499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:10.392  [2024-12-16 11:38:36.272517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:17:10.392  [2024-12-16 11:38:36.272617] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:17:10.392  [2024-12-16 11:38:36.272642] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:10.392  [2024-12-16 11:38:36.272741] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:17:10.392  [2024-12-16 11:38:36.272756] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:17:10.392  [2024-12-16 11:38:36.273011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:17:10.392  [2024-12-16 11:38:36.273144] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:17:10.392  [2024-12-16 11:38:36.273171] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:17:10.392  [2024-12-16 11:38:36.273288] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:10.392  pt2
00:17:10.392   11:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:10.392   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:17:10.392   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:17:10.392   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:17:10.392   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:10.392   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:10.392   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:10.392   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:10.392   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:10.392   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:10.392   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:10.392   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:10.392   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:10.392    11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:10.392    11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:10.392    11:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:10.392    11:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:10.392    11:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:10.392   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:10.392    "name": "raid_bdev1",
00:17:10.392    "uuid": "8eaf65b7-ee69-483a-a0c8-bbad51fa0b45",
00:17:10.392    "strip_size_kb": 0,
00:17:10.392    "state": "online",
00:17:10.392    "raid_level": "raid1",
00:17:10.392    "superblock": true,
00:17:10.392    "num_base_bdevs": 2,
00:17:10.392    "num_base_bdevs_discovered": 2,
00:17:10.392    "num_base_bdevs_operational": 2,
00:17:10.392    "base_bdevs_list": [
00:17:10.392      {
00:17:10.392        "name": "pt1",
00:17:10.392        "uuid": "00000000-0000-0000-0000-000000000001",
00:17:10.392        "is_configured": true,
00:17:10.392        "data_offset": 256,
00:17:10.392        "data_size": 7936
00:17:10.392      },
00:17:10.392      {
00:17:10.392        "name": "pt2",
00:17:10.392        "uuid": "00000000-0000-0000-0000-000000000002",
00:17:10.392        "is_configured": true,
00:17:10.392        "data_offset": 256,
00:17:10.392        "data_size": 7936
00:17:10.392      }
00:17:10.392    ]
00:17:10.392  }'
00:17:10.392   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:10.392   11:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:10.653   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:17:10.653   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:17:10.653   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:17:10.653   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:17:10.653   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name
00:17:10.653   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:17:10.653    11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:10.653    11:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:10.653    11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:17:10.653    11:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:10.653  [2024-12-16 11:38:36.687577] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:10.653    11:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:10.653   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:17:10.653    "name": "raid_bdev1",
00:17:10.653    "aliases": [
00:17:10.653      "8eaf65b7-ee69-483a-a0c8-bbad51fa0b45"
00:17:10.653    ],
00:17:10.653    "product_name": "Raid Volume",
00:17:10.653    "block_size": 4096,
00:17:10.653    "num_blocks": 7936,
00:17:10.653    "uuid": "8eaf65b7-ee69-483a-a0c8-bbad51fa0b45",
00:17:10.653    "assigned_rate_limits": {
00:17:10.653      "rw_ios_per_sec": 0,
00:17:10.653      "rw_mbytes_per_sec": 0,
00:17:10.653      "r_mbytes_per_sec": 0,
00:17:10.653      "w_mbytes_per_sec": 0
00:17:10.653    },
00:17:10.653    "claimed": false,
00:17:10.653    "zoned": false,
00:17:10.653    "supported_io_types": {
00:17:10.653      "read": true,
00:17:10.653      "write": true,
00:17:10.653      "unmap": false,
00:17:10.653      "flush": false,
00:17:10.653      "reset": true,
00:17:10.653      "nvme_admin": false,
00:17:10.653      "nvme_io": false,
00:17:10.653      "nvme_io_md": false,
00:17:10.653      "write_zeroes": true,
00:17:10.653      "zcopy": false,
00:17:10.653      "get_zone_info": false,
00:17:10.653      "zone_management": false,
00:17:10.653      "zone_append": false,
00:17:10.653      "compare": false,
00:17:10.653      "compare_and_write": false,
00:17:10.653      "abort": false,
00:17:10.653      "seek_hole": false,
00:17:10.653      "seek_data": false,
00:17:10.653      "copy": false,
00:17:10.653      "nvme_iov_md": false
00:17:10.653    },
00:17:10.653    "memory_domains": [
00:17:10.653      {
00:17:10.653        "dma_device_id": "system",
00:17:10.653        "dma_device_type": 1
00:17:10.653      },
00:17:10.653      {
00:17:10.653        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:10.653        "dma_device_type": 2
00:17:10.653      },
00:17:10.653      {
00:17:10.653        "dma_device_id": "system",
00:17:10.653        "dma_device_type": 1
00:17:10.653      },
00:17:10.653      {
00:17:10.653        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:10.653        "dma_device_type": 2
00:17:10.653      }
00:17:10.653    ],
00:17:10.653    "driver_specific": {
00:17:10.653      "raid": {
00:17:10.653        "uuid": "8eaf65b7-ee69-483a-a0c8-bbad51fa0b45",
00:17:10.653        "strip_size_kb": 0,
00:17:10.653        "state": "online",
00:17:10.653        "raid_level": "raid1",
00:17:10.653        "superblock": true,
00:17:10.653        "num_base_bdevs": 2,
00:17:10.653        "num_base_bdevs_discovered": 2,
00:17:10.653        "num_base_bdevs_operational": 2,
00:17:10.653        "base_bdevs_list": [
00:17:10.653          {
00:17:10.653            "name": "pt1",
00:17:10.653            "uuid": "00000000-0000-0000-0000-000000000001",
00:17:10.653            "is_configured": true,
00:17:10.653            "data_offset": 256,
00:17:10.653            "data_size": 7936
00:17:10.653          },
00:17:10.653          {
00:17:10.653            "name": "pt2",
00:17:10.653            "uuid": "00000000-0000-0000-0000-000000000002",
00:17:10.653            "is_configured": true,
00:17:10.653            "data_offset": 256,
00:17:10.653            "data_size": 7936
00:17:10.653          }
00:17:10.653        ]
00:17:10.653      }
00:17:10.653    }
00:17:10.653  }'
00:17:10.653    11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:17:10.914   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:17:10.914  pt2'
00:17:10.914    11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:10.914   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096   '
00:17:10.914   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:10.914    11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:10.914    11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:17:10.914    11:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:10.914    11:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:10.914    11:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:10.914   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096   '
00:17:10.914   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096    == \4\0\9\6\ \ \  ]]
00:17:10.914   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:10.914    11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:17:10.914    11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:10.914    11:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:10.914    11:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:10.914    11:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:10.914   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096   '
00:17:10.914   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096    == \4\0\9\6\ \ \  ]]
00:17:10.914    11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:10.914    11:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:10.914    11:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:10.914    11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:17:10.914  [2024-12-16 11:38:36.847257] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:10.914    11:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:10.914   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 8eaf65b7-ee69-483a-a0c8-bbad51fa0b45 '!=' 8eaf65b7-ee69-483a-a0c8-bbad51fa0b45 ']'
00:17:10.914   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:17:10.914   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in
00:17:10.914   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0
00:17:10.914   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:17:10.914   11:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:10.914   11:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:10.914  [2024-12-16 11:38:36.894931] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:17:10.914   11:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:10.914   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:10.914   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:10.914   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:10.914   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:10.914   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:10.914   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:10.914   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:10.914   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:10.914   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:10.914   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:10.914    11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:10.914    11:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:10.914    11:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:10.914    11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:10.914    11:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:10.914   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:10.914    "name": "raid_bdev1",
00:17:10.914    "uuid": "8eaf65b7-ee69-483a-a0c8-bbad51fa0b45",
00:17:10.914    "strip_size_kb": 0,
00:17:10.914    "state": "online",
00:17:10.914    "raid_level": "raid1",
00:17:10.914    "superblock": true,
00:17:10.914    "num_base_bdevs": 2,
00:17:10.914    "num_base_bdevs_discovered": 1,
00:17:10.914    "num_base_bdevs_operational": 1,
00:17:10.914    "base_bdevs_list": [
00:17:10.914      {
00:17:10.914        "name": null,
00:17:10.914        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:10.914        "is_configured": false,
00:17:10.914        "data_offset": 0,
00:17:10.914        "data_size": 7936
00:17:10.914      },
00:17:10.914      {
00:17:10.914        "name": "pt2",
00:17:10.914        "uuid": "00000000-0000-0000-0000-000000000002",
00:17:10.914        "is_configured": true,
00:17:10.914        "data_offset": 256,
00:17:10.914        "data_size": 7936
00:17:10.914      }
00:17:10.914    ]
00:17:10.914  }'
00:17:10.914   11:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:10.914   11:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:11.485   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:17:11.485   11:38:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:11.485   11:38:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:11.485  [2024-12-16 11:38:37.318184] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:11.485  [2024-12-16 11:38:37.318279] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:11.485  [2024-12-16 11:38:37.318398] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:11.485  [2024-12-16 11:38:37.318483] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:11.485  [2024-12-16 11:38:37.318547] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:17:11.485   11:38:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:11.485    11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:17:11.485    11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:11.485    11:38:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:11.485    11:38:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:11.485    11:38:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:11.485   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:17:11.485   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:17:11.485   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:17:11.485   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:17:11.486   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:17:11.486   11:38:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:11.486   11:38:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:11.486   11:38:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:11.486   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:17:11.486   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:17:11.486   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:17:11.486   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:17:11.486   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1
00:17:11.486   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:17:11.486   11:38:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:11.486   11:38:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:11.486  [2024-12-16 11:38:37.390056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:17:11.486  [2024-12-16 11:38:37.390154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:11.486  [2024-12-16 11:38:37.390191] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:17:11.486  [2024-12-16 11:38:37.390234] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:11.486  [2024-12-16 11:38:37.392672] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:11.486  [2024-12-16 11:38:37.392748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:17:11.486  [2024-12-16 11:38:37.392867] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:17:11.486  [2024-12-16 11:38:37.392934] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:11.486  [2024-12-16 11:38:37.393054] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:17:11.486  [2024-12-16 11:38:37.393094] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:17:11.486  [2024-12-16 11:38:37.393354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:17:11.486  [2024-12-16 11:38:37.393536] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:17:11.486  [2024-12-16 11:38:37.393597] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00
00:17:11.486  [2024-12-16 11:38:37.393739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:11.486  pt2
00:17:11.486   11:38:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:11.486   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:11.486   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:11.486   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:11.486   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:11.486   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:11.486   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:11.486   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:11.486   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:11.486   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:11.486   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:11.486    11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:11.486    11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:11.486    11:38:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:11.486    11:38:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:11.486    11:38:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:11.486   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:11.486    "name": "raid_bdev1",
00:17:11.486    "uuid": "8eaf65b7-ee69-483a-a0c8-bbad51fa0b45",
00:17:11.486    "strip_size_kb": 0,
00:17:11.486    "state": "online",
00:17:11.486    "raid_level": "raid1",
00:17:11.486    "superblock": true,
00:17:11.486    "num_base_bdevs": 2,
00:17:11.486    "num_base_bdevs_discovered": 1,
00:17:11.486    "num_base_bdevs_operational": 1,
00:17:11.486    "base_bdevs_list": [
00:17:11.486      {
00:17:11.486        "name": null,
00:17:11.486        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:11.486        "is_configured": false,
00:17:11.486        "data_offset": 256,
00:17:11.486        "data_size": 7936
00:17:11.486      },
00:17:11.486      {
00:17:11.486        "name": "pt2",
00:17:11.486        "uuid": "00000000-0000-0000-0000-000000000002",
00:17:11.486        "is_configured": true,
00:17:11.486        "data_offset": 256,
00:17:11.486        "data_size": 7936
00:17:11.486      }
00:17:11.486    ]
00:17:11.486  }'
00:17:11.486   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:11.486   11:38:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:12.057   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:17:12.057   11:38:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:12.057   11:38:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:12.057  [2024-12-16 11:38:37.829363] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:12.057  [2024-12-16 11:38:37.829451] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:12.057  [2024-12-16 11:38:37.829595] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:12.057  [2024-12-16 11:38:37.829659] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:12.057  [2024-12-16 11:38:37.829673] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline
00:17:12.057   11:38:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:12.057    11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:12.057    11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:17:12.057    11:38:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:12.057    11:38:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:12.057    11:38:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:12.057   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:17:12.057   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:17:12.057   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']'
00:17:12.057   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:17:12.057   11:38:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:12.057   11:38:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:12.057  [2024-12-16 11:38:37.893175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:17:12.057  [2024-12-16 11:38:37.893283] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:12.057  [2024-12-16 11:38:37.893324] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:17:12.057  [2024-12-16 11:38:37.893366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:12.057  [2024-12-16 11:38:37.895562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:12.057  [2024-12-16 11:38:37.895638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:17:12.057  [2024-12-16 11:38:37.895734] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:17:12.057  [2024-12-16 11:38:37.895804] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:17:12.057  [2024-12-16 11:38:37.895958] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:17:12.057  [2024-12-16 11:38:37.896017] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:12.057  [2024-12-16 11:38:37.896056] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring
00:17:12.057  [2024-12-16 11:38:37.896139] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:12.057  [2024-12-16 11:38:37.896245] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400
00:17:12.057  [2024-12-16 11:38:37.896284] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:17:12.057  [2024-12-16 11:38:37.896531] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:17:12.057  [2024-12-16 11:38:37.896707] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400
00:17:12.057  [2024-12-16 11:38:37.896748] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400
00:17:12.057  [2024-12-16 11:38:37.896898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:12.057  pt1
00:17:12.057   11:38:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:12.057   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']'
00:17:12.057   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:12.057   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:12.057   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:12.057   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:12.057   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:12.057   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:12.057   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:12.057   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:12.057   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:12.057   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:12.057    11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:12.057    11:38:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:12.057    11:38:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:12.057    11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:12.057    11:38:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:12.057   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:12.057    "name": "raid_bdev1",
00:17:12.057    "uuid": "8eaf65b7-ee69-483a-a0c8-bbad51fa0b45",
00:17:12.057    "strip_size_kb": 0,
00:17:12.057    "state": "online",
00:17:12.057    "raid_level": "raid1",
00:17:12.057    "superblock": true,
00:17:12.057    "num_base_bdevs": 2,
00:17:12.057    "num_base_bdevs_discovered": 1,
00:17:12.057    "num_base_bdevs_operational": 1,
00:17:12.057    "base_bdevs_list": [
00:17:12.057      {
00:17:12.057        "name": null,
00:17:12.057        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:12.057        "is_configured": false,
00:17:12.057        "data_offset": 256,
00:17:12.057        "data_size": 7936
00:17:12.057      },
00:17:12.057      {
00:17:12.057        "name": "pt2",
00:17:12.057        "uuid": "00000000-0000-0000-0000-000000000002",
00:17:12.057        "is_configured": true,
00:17:12.057        "data_offset": 256,
00:17:12.057        "data_size": 7936
00:17:12.057      }
00:17:12.057    ]
00:17:12.057  }'
00:17:12.057   11:38:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:12.057   11:38:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:12.317    11:38:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online
00:17:12.317    11:38:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:12.317    11:38:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:12.317    11:38:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:17:12.317    11:38:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:12.577   11:38:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:17:12.577    11:38:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
00:17:12.577    11:38:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:12.577    11:38:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:12.577    11:38:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:12.577  [2024-12-16 11:38:38.424684] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:12.577    11:38:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:12.577   11:38:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 8eaf65b7-ee69-483a-a0c8-bbad51fa0b45 '!=' 8eaf65b7-ee69-483a-a0c8-bbad51fa0b45 ']'
00:17:12.577   11:38:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 96927
00:17:12.577   11:38:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 96927 ']'
00:17:12.577   11:38:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 96927
00:17:12.577    11:38:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname
00:17:12.577   11:38:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:17:12.577    11:38:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96927
00:17:12.577   11:38:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:17:12.577   11:38:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:17:12.577  killing process with pid 96927
00:17:12.577   11:38:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96927'
00:17:12.577   11:38:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 96927
00:17:12.577  [2024-12-16 11:38:38.509511] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:17:12.577  [2024-12-16 11:38:38.509634] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:12.577  [2024-12-16 11:38:38.509703] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:12.577  [2024-12-16 11:38:38.509717] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline
00:17:12.577   11:38:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 96927
00:17:12.577  [2024-12-16 11:38:38.534634] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:17:12.837   11:38:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0
00:17:12.837  
00:17:12.837  real	0m4.854s
00:17:12.837  user	0m7.858s
00:17:12.837  sys	0m1.040s
00:17:12.837   11:38:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable
00:17:12.837   11:38:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:17:12.837  ************************************
00:17:12.837  END TEST raid_superblock_test_4k
00:17:12.837  ************************************
00:17:12.837   11:38:38 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']'
00:17:12.837   11:38:38 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true
00:17:12.837   11:38:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:17:12.837   11:38:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:17:12.837   11:38:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:17:12.837  ************************************
00:17:12.837  START TEST raid_rebuild_test_sb_4k
00:17:12.837  ************************************
00:17:12.837   11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true
00:17:12.837   11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1
00:17:12.837   11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2
00:17:12.837   11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true
00:17:12.837   11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:17:12.837   11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true
00:17:12.837    11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:17:12.837    11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:17:12.837    11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:17:12.837    11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:17:12.837    11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:17:12.837    11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:17:12.837    11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:17:12.837    11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:17:12.837   11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:17:12.837   11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:17:12.837   11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:17:12.837   11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size
00:17:12.837   11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg
00:17:12.837   11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:17:12.837   11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset
00:17:12.837   11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']'
00:17:12.837   11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0
00:17:12.837   11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
00:17:12.837   11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
00:17:12.837   11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=97244
00:17:12.837   11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:17:12.837   11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 97244
00:17:12.837   11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 97244 ']'
00:17:12.837   11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:12.837   11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100
00:17:12.837   11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:12.837  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:12.837   11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable
00:17:12.837   11:38:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:13.098  [2024-12-16 11:38:38.958083] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:17:13.098  [2024-12-16 11:38:38.958309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97244 ]
00:17:13.098  I/O size of 3145728 is greater than zero copy threshold (65536).
00:17:13.098  Zero copy mechanism will not be used.
00:17:13.098  [2024-12-16 11:38:39.118310] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:13.360  [2024-12-16 11:38:39.165517] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:17:13.360  [2024-12-16 11:38:39.208910] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:13.360  [2024-12-16 11:38:39.209027] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:13.930  BaseBdev1_malloc
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:13.930  [2024-12-16 11:38:39.883680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:17:13.930  [2024-12-16 11:38:39.883750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:13.930  [2024-12-16 11:38:39.883779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:17:13.930  [2024-12-16 11:38:39.883795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:13.930  [2024-12-16 11:38:39.886249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:13.930  [2024-12-16 11:38:39.886293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:17:13.930  BaseBdev1
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:13.930  BaseBdev2_malloc
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:13.930  [2024-12-16 11:38:39.919804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:17:13.930  [2024-12-16 11:38:39.919875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:13.930  [2024-12-16 11:38:39.919903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:17:13.930  [2024-12-16 11:38:39.919915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:13.930  [2024-12-16 11:38:39.922763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:13.930  [2024-12-16 11:38:39.922874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:17:13.930  BaseBdev2
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:13.930  spare_malloc
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:13.930  spare_delay
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:13.930  [2024-12-16 11:38:39.956610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:17:13.930  [2024-12-16 11:38:39.956685] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:13.930  [2024-12-16 11:38:39.956723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:17:13.930  [2024-12-16 11:38:39.956732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:13.930  [2024-12-16 11:38:39.958871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:13.930  [2024-12-16 11:38:39.958907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:17:13.930  spare
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1
00:17:13.930   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.931   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:13.931  [2024-12-16 11:38:39.964639] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:13.931  [2024-12-16 11:38:39.966614] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:17:13.931  [2024-12-16 11:38:39.966772] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:17:13.931  [2024-12-16 11:38:39.966785] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:17:13.931  [2024-12-16 11:38:39.967046] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:17:13.931  [2024-12-16 11:38:39.967174] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:17:13.931  [2024-12-16 11:38:39.967187] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:17:13.931  [2024-12-16 11:38:39.967322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:13.931   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.931   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:17:13.931   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:13.931   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:13.931   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:13.931   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:13.931   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:13.931   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:13.931   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:13.931   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:13.931   11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:13.931    11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:13.931    11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:13.931    11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.931    11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:13.931    11:38:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:14.191   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:14.191    "name": "raid_bdev1",
00:17:14.191    "uuid": "2259315b-caea-498c-ad1a-17cc4d15181b",
00:17:14.191    "strip_size_kb": 0,
00:17:14.191    "state": "online",
00:17:14.191    "raid_level": "raid1",
00:17:14.191    "superblock": true,
00:17:14.191    "num_base_bdevs": 2,
00:17:14.191    "num_base_bdevs_discovered": 2,
00:17:14.191    "num_base_bdevs_operational": 2,
00:17:14.191    "base_bdevs_list": [
00:17:14.191      {
00:17:14.191        "name": "BaseBdev1",
00:17:14.191        "uuid": "86158940-2870-54aa-a284-ac524636a80e",
00:17:14.191        "is_configured": true,
00:17:14.191        "data_offset": 256,
00:17:14.191        "data_size": 7936
00:17:14.191      },
00:17:14.191      {
00:17:14.191        "name": "BaseBdev2",
00:17:14.191        "uuid": "31b2f384-5d6a-5392-a3c9-1bb0688b0e28",
00:17:14.191        "is_configured": true,
00:17:14.191        "data_offset": 256,
00:17:14.191        "data_size": 7936
00:17:14.191      }
00:17:14.191    ]
00:17:14.191  }'
00:17:14.191   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:14.191   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:14.451    11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:14.451    11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:17:14.451    11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:14.451    11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:14.451  [2024-12-16 11:38:40.480030] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:14.451    11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:14.451   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936
00:17:14.712    11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:14.712    11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:17:14.712    11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:14.712    11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:14.712    11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:14.712   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256
00:17:14.712   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:17:14.712   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:17:14.712   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:17:14.712   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:17:14.712   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:17:14.712   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:17:14.712   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list
00:17:14.712   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:17:14.712   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list
00:17:14.712   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i
00:17:14.712   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:17:14.712   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:17:14.712   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:17:14.712  [2024-12-16 11:38:40.755509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:17:14.712  /dev/nbd0
00:17:14.972    11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:17:14.972   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:17:14.972   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:17:14.972   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i
00:17:14.972   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:17:14.972   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:17:14.972   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:17:14.972   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break
00:17:14.972   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:17:14.972   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:17:14.972   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:17:14.972  1+0 records in
00:17:14.972  1+0 records out
00:17:14.972  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000542069 s, 7.6 MB/s
00:17:14.972    11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:17:14.972   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096
00:17:14.972   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:17:14.972   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:17:14.972   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0
00:17:14.972   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:17:14.972   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:17:14.972   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:17:14.972   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:17:14.972   11:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct
00:17:15.542  7936+0 records in
00:17:15.542  7936+0 records out
00:17:15.542  32505856 bytes (33 MB, 31 MiB) copied, 0.655926 s, 49.6 MB/s
00:17:15.542   11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:17:15.542   11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:17:15.542   11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:17:15.542   11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list
00:17:15.542   11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i
00:17:15.542   11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:17:15.542   11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:17:15.802    11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:17:15.802  [2024-12-16 11:38:41.713594] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:15.802   11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:17:15.802   11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:17:15.802   11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:17:15.802   11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:17:15.802   11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:17:15.802   11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break
00:17:15.802   11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0
00:17:15.802   11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:17:15.802   11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:15.802   11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:15.802  [2024-12-16 11:38:41.731030] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:17:15.802   11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:15.802   11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:15.802   11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:15.802   11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:15.802   11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:15.802   11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:15.802   11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:15.802   11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:15.802   11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:15.802   11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:15.802   11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:15.802    11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:15.802    11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:15.803    11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:15.803    11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:15.803    11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:15.803   11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:15.803    "name": "raid_bdev1",
00:17:15.803    "uuid": "2259315b-caea-498c-ad1a-17cc4d15181b",
00:17:15.803    "strip_size_kb": 0,
00:17:15.803    "state": "online",
00:17:15.803    "raid_level": "raid1",
00:17:15.803    "superblock": true,
00:17:15.803    "num_base_bdevs": 2,
00:17:15.803    "num_base_bdevs_discovered": 1,
00:17:15.803    "num_base_bdevs_operational": 1,
00:17:15.803    "base_bdevs_list": [
00:17:15.803      {
00:17:15.803        "name": null,
00:17:15.803        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:15.803        "is_configured": false,
00:17:15.803        "data_offset": 0,
00:17:15.803        "data_size": 7936
00:17:15.803      },
00:17:15.803      {
00:17:15.803        "name": "BaseBdev2",
00:17:15.803        "uuid": "31b2f384-5d6a-5392-a3c9-1bb0688b0e28",
00:17:15.803        "is_configured": true,
00:17:15.803        "data_offset": 256,
00:17:15.803        "data_size": 7936
00:17:15.803      }
00:17:15.803    ]
00:17:15.803  }'
00:17:15.803   11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:15.803   11:38:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:16.374   11:38:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:17:16.374   11:38:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:16.374   11:38:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:16.374  [2024-12-16 11:38:42.174316] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:17:16.374  [2024-12-16 11:38:42.178701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0
00:17:16.374   11:38:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:16.374   11:38:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1
00:17:16.374  [2024-12-16 11:38:42.180779] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:17:17.320   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:17.320   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:17.320   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:17.320   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:17.320   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:17.320    11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:17.320    11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:17.320    11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:17.320    11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:17.320    11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:17.320   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:17.320    "name": "raid_bdev1",
00:17:17.320    "uuid": "2259315b-caea-498c-ad1a-17cc4d15181b",
00:17:17.320    "strip_size_kb": 0,
00:17:17.320    "state": "online",
00:17:17.320    "raid_level": "raid1",
00:17:17.320    "superblock": true,
00:17:17.320    "num_base_bdevs": 2,
00:17:17.320    "num_base_bdevs_discovered": 2,
00:17:17.320    "num_base_bdevs_operational": 2,
00:17:17.320    "process": {
00:17:17.320      "type": "rebuild",
00:17:17.320      "target": "spare",
00:17:17.320      "progress": {
00:17:17.320        "blocks": 2560,
00:17:17.320        "percent": 32
00:17:17.320      }
00:17:17.320    },
00:17:17.320    "base_bdevs_list": [
00:17:17.320      {
00:17:17.320        "name": "spare",
00:17:17.320        "uuid": "2020adfc-121b-55f9-aa07-b1dee69210c8",
00:17:17.320        "is_configured": true,
00:17:17.320        "data_offset": 256,
00:17:17.320        "data_size": 7936
00:17:17.320      },
00:17:17.320      {
00:17:17.320        "name": "BaseBdev2",
00:17:17.320        "uuid": "31b2f384-5d6a-5392-a3c9-1bb0688b0e28",
00:17:17.320        "is_configured": true,
00:17:17.320        "data_offset": 256,
00:17:17.320        "data_size": 7936
00:17:17.320      }
00:17:17.320    ]
00:17:17.320  }'
00:17:17.320    11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:17.320   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:17.320    11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:17.320   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:17.320   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:17:17.320   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:17.320   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:17.320  [2024-12-16 11:38:43.329906] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:17.580  [2024-12-16 11:38:43.385811] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:17:17.580  [2024-12-16 11:38:43.385879] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:17.580  [2024-12-16 11:38:43.385901] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:17.580  [2024-12-16 11:38:43.385910] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:17:17.580   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:17.580   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:17.580   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:17.580   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:17.580   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:17.580   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:17.580   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:17.580   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:17.580   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:17.580   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:17.580   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:17.580    11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:17.580    11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:17.580    11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:17.580    11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:17.580    11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:17.580   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:17.580    "name": "raid_bdev1",
00:17:17.580    "uuid": "2259315b-caea-498c-ad1a-17cc4d15181b",
00:17:17.580    "strip_size_kb": 0,
00:17:17.580    "state": "online",
00:17:17.580    "raid_level": "raid1",
00:17:17.580    "superblock": true,
00:17:17.580    "num_base_bdevs": 2,
00:17:17.581    "num_base_bdevs_discovered": 1,
00:17:17.581    "num_base_bdevs_operational": 1,
00:17:17.581    "base_bdevs_list": [
00:17:17.581      {
00:17:17.581        "name": null,
00:17:17.581        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:17.581        "is_configured": false,
00:17:17.581        "data_offset": 0,
00:17:17.581        "data_size": 7936
00:17:17.581      },
00:17:17.581      {
00:17:17.581        "name": "BaseBdev2",
00:17:17.581        "uuid": "31b2f384-5d6a-5392-a3c9-1bb0688b0e28",
00:17:17.581        "is_configured": true,
00:17:17.581        "data_offset": 256,
00:17:17.581        "data_size": 7936
00:17:17.581      }
00:17:17.581    ]
00:17:17.581  }'
00:17:17.581   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:17.581   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:17.841   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:17.841   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:17.841   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:17.841   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:17.841   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:17.841    11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:17.841    11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:17.841    11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:17.841    11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:17.841    11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:17.841   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:17.841    "name": "raid_bdev1",
00:17:17.841    "uuid": "2259315b-caea-498c-ad1a-17cc4d15181b",
00:17:17.841    "strip_size_kb": 0,
00:17:17.841    "state": "online",
00:17:17.841    "raid_level": "raid1",
00:17:17.841    "superblock": true,
00:17:17.841    "num_base_bdevs": 2,
00:17:17.841    "num_base_bdevs_discovered": 1,
00:17:17.841    "num_base_bdevs_operational": 1,
00:17:17.841    "base_bdevs_list": [
00:17:17.841      {
00:17:17.841        "name": null,
00:17:17.841        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:17.841        "is_configured": false,
00:17:17.841        "data_offset": 0,
00:17:17.841        "data_size": 7936
00:17:17.841      },
00:17:17.841      {
00:17:17.841        "name": "BaseBdev2",
00:17:17.842        "uuid": "31b2f384-5d6a-5392-a3c9-1bb0688b0e28",
00:17:17.842        "is_configured": true,
00:17:17.842        "data_offset": 256,
00:17:17.842        "data_size": 7936
00:17:17.842      }
00:17:17.842    ]
00:17:17.842  }'
00:17:17.842    11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:17.842   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:18.101    11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:18.101   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:18.101   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:17:18.101   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:18.101   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:18.101  [2024-12-16 11:38:43.965575] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:17:18.101  [2024-12-16 11:38:43.969842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190
00:17:18.101   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:18.101   11:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1
00:17:18.101  [2024-12-16 11:38:43.971792] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:17:19.041   11:38:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:19.041   11:38:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:19.041   11:38:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:19.041   11:38:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:19.041   11:38:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:19.041    11:38:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:19.041    11:38:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:19.041    11:38:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:19.041    11:38:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:19.041    11:38:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:19.041   11:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:19.041    "name": "raid_bdev1",
00:17:19.041    "uuid": "2259315b-caea-498c-ad1a-17cc4d15181b",
00:17:19.041    "strip_size_kb": 0,
00:17:19.041    "state": "online",
00:17:19.041    "raid_level": "raid1",
00:17:19.041    "superblock": true,
00:17:19.041    "num_base_bdevs": 2,
00:17:19.041    "num_base_bdevs_discovered": 2,
00:17:19.041    "num_base_bdevs_operational": 2,
00:17:19.041    "process": {
00:17:19.041      "type": "rebuild",
00:17:19.041      "target": "spare",
00:17:19.041      "progress": {
00:17:19.041        "blocks": 2560,
00:17:19.041        "percent": 32
00:17:19.041      }
00:17:19.041    },
00:17:19.041    "base_bdevs_list": [
00:17:19.041      {
00:17:19.041        "name": "spare",
00:17:19.041        "uuid": "2020adfc-121b-55f9-aa07-b1dee69210c8",
00:17:19.041        "is_configured": true,
00:17:19.041        "data_offset": 256,
00:17:19.041        "data_size": 7936
00:17:19.041      },
00:17:19.041      {
00:17:19.041        "name": "BaseBdev2",
00:17:19.041        "uuid": "31b2f384-5d6a-5392-a3c9-1bb0688b0e28",
00:17:19.041        "is_configured": true,
00:17:19.041        "data_offset": 256,
00:17:19.041        "data_size": 7936
00:17:19.041      }
00:17:19.041    ]
00:17:19.041  }'
00:17:19.041    11:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:19.041   11:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:19.041    11:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:19.301   11:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:19.301   11:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:17:19.302   11:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:17:19.302  /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:17:19.302   11:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:17:19.302   11:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:17:19.302   11:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:17:19.302   11:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=578
00:17:19.302   11:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:17:19.302   11:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:19.302   11:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:19.302   11:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:19.302   11:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:19.302   11:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:19.302    11:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:19.302    11:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:19.302    11:38:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:19.302    11:38:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:19.302    11:38:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:19.302   11:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:19.302    "name": "raid_bdev1",
00:17:19.302    "uuid": "2259315b-caea-498c-ad1a-17cc4d15181b",
00:17:19.302    "strip_size_kb": 0,
00:17:19.302    "state": "online",
00:17:19.302    "raid_level": "raid1",
00:17:19.302    "superblock": true,
00:17:19.302    "num_base_bdevs": 2,
00:17:19.302    "num_base_bdevs_discovered": 2,
00:17:19.302    "num_base_bdevs_operational": 2,
00:17:19.302    "process": {
00:17:19.302      "type": "rebuild",
00:17:19.302      "target": "spare",
00:17:19.302      "progress": {
00:17:19.302        "blocks": 2816,
00:17:19.302        "percent": 35
00:17:19.302      }
00:17:19.302    },
00:17:19.302    "base_bdevs_list": [
00:17:19.302      {
00:17:19.302        "name": "spare",
00:17:19.302        "uuid": "2020adfc-121b-55f9-aa07-b1dee69210c8",
00:17:19.302        "is_configured": true,
00:17:19.302        "data_offset": 256,
00:17:19.302        "data_size": 7936
00:17:19.302      },
00:17:19.302      {
00:17:19.302        "name": "BaseBdev2",
00:17:19.302        "uuid": "31b2f384-5d6a-5392-a3c9-1bb0688b0e28",
00:17:19.302        "is_configured": true,
00:17:19.302        "data_offset": 256,
00:17:19.302        "data_size": 7936
00:17:19.302      }
00:17:19.302    ]
00:17:19.302  }'
00:17:19.302    11:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:19.302   11:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:19.302    11:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:19.302   11:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:19.302   11:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1
00:17:20.241   11:38:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:17:20.241   11:38:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:20.241   11:38:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:20.241   11:38:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:20.241   11:38:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:20.241   11:38:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:20.241    11:38:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:20.241    11:38:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:20.241    11:38:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:20.241    11:38:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:20.241    11:38:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:20.500   11:38:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:20.500    "name": "raid_bdev1",
00:17:20.500    "uuid": "2259315b-caea-498c-ad1a-17cc4d15181b",
00:17:20.500    "strip_size_kb": 0,
00:17:20.500    "state": "online",
00:17:20.500    "raid_level": "raid1",
00:17:20.500    "superblock": true,
00:17:20.500    "num_base_bdevs": 2,
00:17:20.500    "num_base_bdevs_discovered": 2,
00:17:20.500    "num_base_bdevs_operational": 2,
00:17:20.500    "process": {
00:17:20.500      "type": "rebuild",
00:17:20.500      "target": "spare",
00:17:20.500      "progress": {
00:17:20.500        "blocks": 5888,
00:17:20.500        "percent": 74
00:17:20.500      }
00:17:20.500    },
00:17:20.500    "base_bdevs_list": [
00:17:20.500      {
00:17:20.500        "name": "spare",
00:17:20.500        "uuid": "2020adfc-121b-55f9-aa07-b1dee69210c8",
00:17:20.500        "is_configured": true,
00:17:20.500        "data_offset": 256,
00:17:20.500        "data_size": 7936
00:17:20.500      },
00:17:20.500      {
00:17:20.500        "name": "BaseBdev2",
00:17:20.500        "uuid": "31b2f384-5d6a-5392-a3c9-1bb0688b0e28",
00:17:20.500        "is_configured": true,
00:17:20.500        "data_offset": 256,
00:17:20.500        "data_size": 7936
00:17:20.500      }
00:17:20.500    ]
00:17:20.500  }'
00:17:20.500    11:38:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:20.500   11:38:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:20.500    11:38:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:20.500   11:38:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:20.500   11:38:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1
00:17:21.066  [2024-12-16 11:38:47.083503] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:17:21.066  [2024-12-16 11:38:47.083676] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:17:21.066  [2024-12-16 11:38:47.083842] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:21.632   11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:17:21.632   11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:21.632   11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:21.632   11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:21.632   11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:21.632   11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:21.632    11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:21.632    11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:21.632    11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:21.633    11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:21.633    11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:21.633   11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:21.633    "name": "raid_bdev1",
00:17:21.633    "uuid": "2259315b-caea-498c-ad1a-17cc4d15181b",
00:17:21.633    "strip_size_kb": 0,
00:17:21.633    "state": "online",
00:17:21.633    "raid_level": "raid1",
00:17:21.633    "superblock": true,
00:17:21.633    "num_base_bdevs": 2,
00:17:21.633    "num_base_bdevs_discovered": 2,
00:17:21.633    "num_base_bdevs_operational": 2,
00:17:21.633    "base_bdevs_list": [
00:17:21.633      {
00:17:21.633        "name": "spare",
00:17:21.633        "uuid": "2020adfc-121b-55f9-aa07-b1dee69210c8",
00:17:21.633        "is_configured": true,
00:17:21.633        "data_offset": 256,
00:17:21.633        "data_size": 7936
00:17:21.633      },
00:17:21.633      {
00:17:21.633        "name": "BaseBdev2",
00:17:21.633        "uuid": "31b2f384-5d6a-5392-a3c9-1bb0688b0e28",
00:17:21.633        "is_configured": true,
00:17:21.633        "data_offset": 256,
00:17:21.633        "data_size": 7936
00:17:21.633      }
00:17:21.633    ]
00:17:21.633  }'
00:17:21.633    11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:21.633   11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:17:21.633    11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:21.633   11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:17:21.633   11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break
00:17:21.633   11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:21.633   11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:21.633   11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:21.633   11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:21.633   11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:21.633    11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:21.633    11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:21.633    11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:21.633    11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:21.633    11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:21.633   11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:21.633    "name": "raid_bdev1",
00:17:21.633    "uuid": "2259315b-caea-498c-ad1a-17cc4d15181b",
00:17:21.633    "strip_size_kb": 0,
00:17:21.633    "state": "online",
00:17:21.633    "raid_level": "raid1",
00:17:21.633    "superblock": true,
00:17:21.633    "num_base_bdevs": 2,
00:17:21.633    "num_base_bdevs_discovered": 2,
00:17:21.633    "num_base_bdevs_operational": 2,
00:17:21.633    "base_bdevs_list": [
00:17:21.633      {
00:17:21.633        "name": "spare",
00:17:21.633        "uuid": "2020adfc-121b-55f9-aa07-b1dee69210c8",
00:17:21.633        "is_configured": true,
00:17:21.633        "data_offset": 256,
00:17:21.633        "data_size": 7936
00:17:21.633      },
00:17:21.633      {
00:17:21.633        "name": "BaseBdev2",
00:17:21.633        "uuid": "31b2f384-5d6a-5392-a3c9-1bb0688b0e28",
00:17:21.633        "is_configured": true,
00:17:21.633        "data_offset": 256,
00:17:21.633        "data_size": 7936
00:17:21.633      }
00:17:21.633    ]
00:17:21.633  }'
00:17:21.633    11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:21.633   11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:21.633    11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:21.633   11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:21.633   11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:17:21.633   11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:21.633   11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:21.633   11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:21.633   11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:21.633   11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:21.633   11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:21.633   11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:21.633   11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:21.633   11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:21.893    11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:21.893    11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:21.893    11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:21.893    11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:21.893    11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:21.893   11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:21.893    "name": "raid_bdev1",
00:17:21.893    "uuid": "2259315b-caea-498c-ad1a-17cc4d15181b",
00:17:21.893    "strip_size_kb": 0,
00:17:21.893    "state": "online",
00:17:21.893    "raid_level": "raid1",
00:17:21.893    "superblock": true,
00:17:21.893    "num_base_bdevs": 2,
00:17:21.893    "num_base_bdevs_discovered": 2,
00:17:21.893    "num_base_bdevs_operational": 2,
00:17:21.893    "base_bdevs_list": [
00:17:21.893      {
00:17:21.893        "name": "spare",
00:17:21.893        "uuid": "2020adfc-121b-55f9-aa07-b1dee69210c8",
00:17:21.893        "is_configured": true,
00:17:21.893        "data_offset": 256,
00:17:21.893        "data_size": 7936
00:17:21.893      },
00:17:21.893      {
00:17:21.893        "name": "BaseBdev2",
00:17:21.893        "uuid": "31b2f384-5d6a-5392-a3c9-1bb0688b0e28",
00:17:21.893        "is_configured": true,
00:17:21.893        "data_offset": 256,
00:17:21.893        "data_size": 7936
00:17:21.893      }
00:17:21.893    ]
00:17:21.893  }'
00:17:21.893   11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:21.893   11:38:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:22.153   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:17:22.153   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:22.153   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:22.153  [2024-12-16 11:38:48.102711] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:22.153  [2024-12-16 11:38:48.102744] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:22.153  [2024-12-16 11:38:48.102835] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:22.153  [2024-12-16 11:38:48.102901] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:22.153  [2024-12-16 11:38:48.102917] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:17:22.153   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:22.153    11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:22.153    11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:22.153    11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:22.153    11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length
00:17:22.153    11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:22.153   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:17:22.153   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:17:22.153   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:17:22.153   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:17:22.153   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:17:22.153   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:17:22.153   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list
00:17:22.153   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:17:22.153   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list
00:17:22.153   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i
00:17:22.153   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:17:22.153   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:17:22.153   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:17:22.413  /dev/nbd0
00:17:22.413    11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:17:22.413   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:17:22.413   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:17:22.413   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i
00:17:22.413   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:17:22.413   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:17:22.413   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:17:22.413   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break
00:17:22.413   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:17:22.413   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:17:22.413   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:17:22.413  1+0 records in
00:17:22.413  1+0 records out
00:17:22.413  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197397 s, 20.8 MB/s
00:17:22.413    11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:17:22.413   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096
00:17:22.413   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:17:22.413   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:17:22.413   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0
00:17:22.413   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:17:22.413   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:17:22.413   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1
00:17:22.673  /dev/nbd1
00:17:22.673    11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:17:22.673   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:17:22.673   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:17:22.673   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i
00:17:22.673   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:17:22.673   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:17:22.673   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:17:22.673   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break
00:17:22.673   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:17:22.673   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:17:22.673   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:17:22.673  1+0 records in
00:17:22.673  1+0 records out
00:17:22.673  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420641 s, 9.7 MB/s
00:17:22.673    11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:17:22.673   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096
00:17:22.673   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:17:22.673   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:17:22.673   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0
00:17:22.673   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:17:22.673   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:17:22.673   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:17:22.932   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:17:22.932   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:17:22.932   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:17:22.932   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list
00:17:22.932   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i
00:17:22.932   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:17:22.932   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:17:22.932    11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:17:22.932   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:17:22.932   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:17:22.932   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:17:22.933   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:17:22.933   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:17:22.933   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break
00:17:22.933   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0
00:17:22.933   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:17:22.933   11:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:17:23.192    11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:17:23.192   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:17:23.192   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:17:23.192   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:17:23.192   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:17:23.192   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:17:23.192   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break
00:17:23.192   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0
00:17:23.192   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:17:23.192   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:17:23.192   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:23.192   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:23.192   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:23.192   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:17:23.192   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:23.192   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:23.192  [2024-12-16 11:38:49.230827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:17:23.192  [2024-12-16 11:38:49.230882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:23.192  [2024-12-16 11:38:49.230902] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:17:23.192  [2024-12-16 11:38:49.230915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:23.192  [2024-12-16 11:38:49.233163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:23.192  [2024-12-16 11:38:49.233238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:17:23.192  [2024-12-16 11:38:49.233340] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:17:23.192  [2024-12-16 11:38:49.233429] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:17:23.192  [2024-12-16 11:38:49.233585] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:17:23.192  spare
00:17:23.192   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:23.192   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:17:23.192   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:23.192   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:23.452  [2024-12-16 11:38:49.333526] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600
00:17:23.452  [2024-12-16 11:38:49.333620] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:17:23.452  [2024-12-16 11:38:49.333900] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0
00:17:23.452  [2024-12-16 11:38:49.334042] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600
00:17:23.452  [2024-12-16 11:38:49.334054] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600
00:17:23.452  [2024-12-16 11:38:49.334188] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:23.452   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:23.452   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:17:23.452   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:23.452   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:23.452   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:23.452   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:23.452   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:23.452   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:23.452   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:23.452   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:23.452   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:23.452    11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:23.452    11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:23.452    11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:23.452    11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:23.452    11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:23.452   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:23.452    "name": "raid_bdev1",
00:17:23.452    "uuid": "2259315b-caea-498c-ad1a-17cc4d15181b",
00:17:23.452    "strip_size_kb": 0,
00:17:23.452    "state": "online",
00:17:23.452    "raid_level": "raid1",
00:17:23.452    "superblock": true,
00:17:23.452    "num_base_bdevs": 2,
00:17:23.452    "num_base_bdevs_discovered": 2,
00:17:23.452    "num_base_bdevs_operational": 2,
00:17:23.452    "base_bdevs_list": [
00:17:23.452      {
00:17:23.452        "name": "spare",
00:17:23.452        "uuid": "2020adfc-121b-55f9-aa07-b1dee69210c8",
00:17:23.452        "is_configured": true,
00:17:23.452        "data_offset": 256,
00:17:23.452        "data_size": 7936
00:17:23.452      },
00:17:23.452      {
00:17:23.452        "name": "BaseBdev2",
00:17:23.452        "uuid": "31b2f384-5d6a-5392-a3c9-1bb0688b0e28",
00:17:23.452        "is_configured": true,
00:17:23.452        "data_offset": 256,
00:17:23.452        "data_size": 7936
00:17:23.452      }
00:17:23.452    ]
00:17:23.452  }'
00:17:23.452   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:23.452   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:23.711   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:23.711   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:23.711   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:23.711   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:23.711   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:23.711    11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:23.711    11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:23.711    11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:23.711    11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:23.970    11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:23.970   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:23.970    "name": "raid_bdev1",
00:17:23.970    "uuid": "2259315b-caea-498c-ad1a-17cc4d15181b",
00:17:23.970    "strip_size_kb": 0,
00:17:23.970    "state": "online",
00:17:23.970    "raid_level": "raid1",
00:17:23.970    "superblock": true,
00:17:23.970    "num_base_bdevs": 2,
00:17:23.970    "num_base_bdevs_discovered": 2,
00:17:23.970    "num_base_bdevs_operational": 2,
00:17:23.970    "base_bdevs_list": [
00:17:23.970      {
00:17:23.970        "name": "spare",
00:17:23.970        "uuid": "2020adfc-121b-55f9-aa07-b1dee69210c8",
00:17:23.970        "is_configured": true,
00:17:23.970        "data_offset": 256,
00:17:23.970        "data_size": 7936
00:17:23.970      },
00:17:23.970      {
00:17:23.970        "name": "BaseBdev2",
00:17:23.970        "uuid": "31b2f384-5d6a-5392-a3c9-1bb0688b0e28",
00:17:23.970        "is_configured": true,
00:17:23.970        "data_offset": 256,
00:17:23.970        "data_size": 7936
00:17:23.970      }
00:17:23.970    ]
00:17:23.970  }'
00:17:23.970    11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:23.970   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:23.970    11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:23.970   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:23.970    11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:23.970    11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:23.970    11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:17:23.970    11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:23.970    11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:23.970   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:17:23.970   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:17:23.970   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:23.970   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:23.970  [2024-12-16 11:38:49.965661] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:23.970   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:23.970   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:23.970   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:23.970   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:23.970   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:23.970   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:23.970   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:23.970   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:23.970   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:23.970   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:23.970   11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:23.970    11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:23.970    11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:23.970    11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:23.970    11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:23.970    11:38:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:23.970   11:38:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:23.970    "name": "raid_bdev1",
00:17:23.970    "uuid": "2259315b-caea-498c-ad1a-17cc4d15181b",
00:17:23.970    "strip_size_kb": 0,
00:17:23.970    "state": "online",
00:17:23.970    "raid_level": "raid1",
00:17:23.970    "superblock": true,
00:17:23.970    "num_base_bdevs": 2,
00:17:23.970    "num_base_bdevs_discovered": 1,
00:17:23.970    "num_base_bdevs_operational": 1,
00:17:23.970    "base_bdevs_list": [
00:17:23.970      {
00:17:23.970        "name": null,
00:17:23.970        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:23.970        "is_configured": false,
00:17:23.970        "data_offset": 0,
00:17:23.970        "data_size": 7936
00:17:23.970      },
00:17:23.970      {
00:17:23.970        "name": "BaseBdev2",
00:17:23.970        "uuid": "31b2f384-5d6a-5392-a3c9-1bb0688b0e28",
00:17:23.970        "is_configured": true,
00:17:23.970        "data_offset": 256,
00:17:23.970        "data_size": 7936
00:17:23.970      }
00:17:23.970    ]
00:17:23.970  }'
00:17:23.970   11:38:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:23.970   11:38:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:24.538   11:38:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:17:24.538   11:38:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:24.538   11:38:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:24.538  [2024-12-16 11:38:50.400955] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:17:24.538  [2024-12-16 11:38:50.401208] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:17:24.538  [2024-12-16 11:38:50.401226] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:17:24.538  [2024-12-16 11:38:50.401266] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:17:24.538  [2024-12-16 11:38:50.405271] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80
00:17:24.538   11:38:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:24.538   11:38:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1
00:17:24.538  [2024-12-16 11:38:50.407232] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:17:25.478   11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:25.478   11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:25.478   11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:25.478   11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:25.478   11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:25.478    11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:25.478    11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:25.478    11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:25.478    11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:25.478    11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:25.478   11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:25.478    "name": "raid_bdev1",
00:17:25.478    "uuid": "2259315b-caea-498c-ad1a-17cc4d15181b",
00:17:25.478    "strip_size_kb": 0,
00:17:25.478    "state": "online",
00:17:25.478    "raid_level": "raid1",
00:17:25.478    "superblock": true,
00:17:25.478    "num_base_bdevs": 2,
00:17:25.478    "num_base_bdevs_discovered": 2,
00:17:25.478    "num_base_bdevs_operational": 2,
00:17:25.478    "process": {
00:17:25.478      "type": "rebuild",
00:17:25.478      "target": "spare",
00:17:25.478      "progress": {
00:17:25.478        "blocks": 2560,
00:17:25.478        "percent": 32
00:17:25.478      }
00:17:25.478    },
00:17:25.478    "base_bdevs_list": [
00:17:25.478      {
00:17:25.478        "name": "spare",
00:17:25.478        "uuid": "2020adfc-121b-55f9-aa07-b1dee69210c8",
00:17:25.478        "is_configured": true,
00:17:25.478        "data_offset": 256,
00:17:25.478        "data_size": 7936
00:17:25.478      },
00:17:25.478      {
00:17:25.478        "name": "BaseBdev2",
00:17:25.478        "uuid": "31b2f384-5d6a-5392-a3c9-1bb0688b0e28",
00:17:25.478        "is_configured": true,
00:17:25.478        "data_offset": 256,
00:17:25.478        "data_size": 7936
00:17:25.478      }
00:17:25.478    ]
00:17:25.478  }'
00:17:25.478    11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:25.478   11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:25.478    11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:25.738   11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
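The xtrace above (bdev_raid.sh@169-177) is the verify_raid_bdev_process helper confirming that a rebuild is running against the spare. A minimal sketch of that pattern, reconstructed from the visible trace (the real helper in bdev/bdev_raid.sh may differ in detail; rpc_cmd is the suite's wrapper around scripts/rpc.py):

verify_raid_bdev_process() {
	local raid_bdev_name=$1 process_type=$2 target=$3
	local raid_bdev_info

	# Dump all RAID bdevs and keep only the one under test.
	raid_bdev_info=$(rpc_cmd bdev_raid_get_bdevs all |
		jq -r ".[] | select(.name == \"$raid_bdev_name\")")

	# Both the background process and its target must match,
	# e.g. "rebuild" targeting "spare" as asserted above.
	[[ $(jq -r '.process.type // "none"' <<< "$raid_bdev_info") == "$process_type" ]]
	[[ $(jq -r '.process.target // "none"' <<< "$raid_bdev_info") == "$target" ]]
}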
00:17:25.738   11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:17:25.738   11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:25.738   11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:25.738  [2024-12-16 11:38:51.576035] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:25.738  [2024-12-16 11:38:51.611915] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:17:25.738  [2024-12-16 11:38:51.611967] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:25.738  [2024-12-16 11:38:51.611985] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:25.738  [2024-12-16 11:38:51.611993] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:17:25.738   11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:25.738   11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:25.738   11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:25.738   11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:25.738   11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:25.738   11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:25.738   11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:25.738   11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:25.738   11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:25.738   11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:25.738   11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:25.738    11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:25.738    11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:25.738    11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:25.738    11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:25.738    11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:25.738   11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:25.738    "name": "raid_bdev1",
00:17:25.738    "uuid": "2259315b-caea-498c-ad1a-17cc4d15181b",
00:17:25.738    "strip_size_kb": 0,
00:17:25.738    "state": "online",
00:17:25.738    "raid_level": "raid1",
00:17:25.738    "superblock": true,
00:17:25.738    "num_base_bdevs": 2,
00:17:25.738    "num_base_bdevs_discovered": 1,
00:17:25.738    "num_base_bdevs_operational": 1,
00:17:25.738    "base_bdevs_list": [
00:17:25.738      {
00:17:25.738        "name": null,
00:17:25.738        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:25.738        "is_configured": false,
00:17:25.738        "data_offset": 0,
00:17:25.738        "data_size": 7936
00:17:25.738      },
00:17:25.738      {
00:17:25.738        "name": "BaseBdev2",
00:17:25.738        "uuid": "31b2f384-5d6a-5392-a3c9-1bb0688b0e28",
00:17:25.738        "is_configured": true,
00:17:25.738        "data_offset": 256,
00:17:25.738        "data_size": 7936
00:17:25.738      }
00:17:25.738    ]
00:17:25.738  }'
00:17:25.738   11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:25.738   11:38:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
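verify_raid_bdev_state (bdev_raid.sh@103-115) switches xtrace off before it compares anything, so only its locals and the bdev_raid_get_bdevs/jq call are visible above. A hedged sketch of what it presumably asserts, inferred from the arguments (online raid1 0 1) and the JSON it captures; the exact comparisons are an assumption:

verify_raid_bdev_state() {
	local raid_bdev_name=$1 expected_state=$2 raid_level=$3
	local strip_size=$4 num_base_bdevs_operational=$5
	local raid_bdev_info

	raid_bdev_info=$(rpc_cmd bdev_raid_get_bdevs all |
		jq -r ".[] | select(.name == \"$raid_bdev_name\")")

	# Assumed checks against the captured JSON (the real helper hides
	# these lines behind xtrace_disable, so they do not appear in the log):
	[[ $(jq -r .state <<< "$raid_bdev_info") == "$expected_state" ]]
	[[ $(jq -r .raid_level <<< "$raid_bdev_info") == "$raid_level" ]]
	[[ $(jq -r .strip_size_kb <<< "$raid_bdev_info") == "$strip_size" ]]
	[[ $(jq -r .num_base_bdevs_operational <<< "$raid_bdev_info") == "$num_base_bdevs_operational" ]]
}

Here it confirms raid_bdev1 stays online as raid1 with a single operational base bdev after the spare was dropped.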
00:17:26.307   11:38:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:17:26.307   11:38:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:26.307   11:38:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:26.307  [2024-12-16 11:38:52.111647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:17:26.307  [2024-12-16 11:38:52.111759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:26.307  [2024-12-16 11:38:52.111803] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:17:26.307  [2024-12-16 11:38:52.111832] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:26.307  [2024-12-16 11:38:52.112310] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:26.307  [2024-12-16 11:38:52.112375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:17:26.307  [2024-12-16 11:38:52.112495] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:17:26.307  [2024-12-16 11:38:52.112545] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:17:26.307  [2024-12-16 11:38:52.112596] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:17:26.307  [2024-12-16 11:38:52.112645] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:17:26.307  [2024-12-16 11:38:52.116779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50
00:17:26.307  spare
00:17:26.307   11:38:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:26.307   11:38:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1
00:17:26.307  [2024-12-16 11:38:52.118925] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
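Steps 761-764 of the test (bdev_raid.sh line numbers in the trace) drop the rebuild target and immediately bring it back: deleting the spare passthru mid-rebuild degrades raid_bdev1, and re-creating it lets the examine path find the stale raid superblock (seq_number 4 < 5) and re-add the bdev, which restarts the rebuild, as the NOTICE above shows. Expressed as plain rpc.py calls for illustration only (the test goes through its rpc_cmd wrapper):

./scripts/rpc.py bdev_passthru_delete spare                 # remove the rebuild target mid-rebuild
./scripts/rpc.py bdev_raid_get_bdevs all                    # raid_bdev1 now reports 1 of 2 base bdevs
./scripts/rpc.py bdev_passthru_create -b spare_delay -p spare
# examine finds the old superblock on "spare", re-adds it to raid_bdev1,
# and a new rebuild starts without any explicit bdev_raid_add_base_bdev call.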
00:17:27.246   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:27.246   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:27.246   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:27.246   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:27.246   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:27.246    11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:27.246    11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:27.246    11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:27.246    11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:27.246    11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:27.246   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:27.246    "name": "raid_bdev1",
00:17:27.246    "uuid": "2259315b-caea-498c-ad1a-17cc4d15181b",
00:17:27.246    "strip_size_kb": 0,
00:17:27.246    "state": "online",
00:17:27.246    "raid_level": "raid1",
00:17:27.246    "superblock": true,
00:17:27.246    "num_base_bdevs": 2,
00:17:27.246    "num_base_bdevs_discovered": 2,
00:17:27.246    "num_base_bdevs_operational": 2,
00:17:27.246    "process": {
00:17:27.246      "type": "rebuild",
00:17:27.246      "target": "spare",
00:17:27.246      "progress": {
00:17:27.246        "blocks": 2560,
00:17:27.246        "percent": 32
00:17:27.246      }
00:17:27.246    },
00:17:27.246    "base_bdevs_list": [
00:17:27.246      {
00:17:27.246        "name": "spare",
00:17:27.246        "uuid": "2020adfc-121b-55f9-aa07-b1dee69210c8",
00:17:27.246        "is_configured": true,
00:17:27.246        "data_offset": 256,
00:17:27.246        "data_size": 7936
00:17:27.246      },
00:17:27.246      {
00:17:27.246        "name": "BaseBdev2",
00:17:27.246        "uuid": "31b2f384-5d6a-5392-a3c9-1bb0688b0e28",
00:17:27.246        "is_configured": true,
00:17:27.246        "data_offset": 256,
00:17:27.246        "data_size": 7936
00:17:27.246      }
00:17:27.246    ]
00:17:27.246  }'
00:17:27.246    11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:27.246   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:27.246    11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:27.247   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:27.247   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:17:27.247   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:27.247   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:27.247  [2024-12-16 11:38:53.275058] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:27.507  [2024-12-16 11:38:53.323412] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:17:27.507  [2024-12-16 11:38:53.323544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:27.507  [2024-12-16 11:38:53.323584] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:27.507  [2024-12-16 11:38:53.323610] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:17:27.507   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:27.507   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:27.507   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:27.507   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:27.507   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:27.507   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:27.507   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:27.507   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:27.507   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:27.507   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:27.507   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:27.507    11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:27.507    11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:27.507    11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:27.507    11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:27.507    11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:27.507   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:27.507    "name": "raid_bdev1",
00:17:27.507    "uuid": "2259315b-caea-498c-ad1a-17cc4d15181b",
00:17:27.507    "strip_size_kb": 0,
00:17:27.507    "state": "online",
00:17:27.507    "raid_level": "raid1",
00:17:27.507    "superblock": true,
00:17:27.507    "num_base_bdevs": 2,
00:17:27.507    "num_base_bdevs_discovered": 1,
00:17:27.507    "num_base_bdevs_operational": 1,
00:17:27.507    "base_bdevs_list": [
00:17:27.507      {
00:17:27.507        "name": null,
00:17:27.507        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:27.507        "is_configured": false,
00:17:27.507        "data_offset": 0,
00:17:27.507        "data_size": 7936
00:17:27.507      },
00:17:27.507      {
00:17:27.507        "name": "BaseBdev2",
00:17:27.507        "uuid": "31b2f384-5d6a-5392-a3c9-1bb0688b0e28",
00:17:27.507        "is_configured": true,
00:17:27.507        "data_offset": 256,
00:17:27.507        "data_size": 7936
00:17:27.507      }
00:17:27.507    ]
00:17:27.507  }'
00:17:27.507   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:27.507   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:27.767   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:27.767   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:27.767   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:27.767   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:27.767   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:27.767    11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:27.767    11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:27.767    11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:27.767    11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:27.767    11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:27.767   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:27.767    "name": "raid_bdev1",
00:17:27.767    "uuid": "2259315b-caea-498c-ad1a-17cc4d15181b",
00:17:27.767    "strip_size_kb": 0,
00:17:27.767    "state": "online",
00:17:27.767    "raid_level": "raid1",
00:17:27.767    "superblock": true,
00:17:27.767    "num_base_bdevs": 2,
00:17:27.767    "num_base_bdevs_discovered": 1,
00:17:27.767    "num_base_bdevs_operational": 1,
00:17:27.767    "base_bdevs_list": [
00:17:27.767      {
00:17:27.767        "name": null,
00:17:27.767        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:27.767        "is_configured": false,
00:17:27.767        "data_offset": 0,
00:17:27.767        "data_size": 7936
00:17:27.767      },
00:17:27.767      {
00:17:27.767        "name": "BaseBdev2",
00:17:27.767        "uuid": "31b2f384-5d6a-5392-a3c9-1bb0688b0e28",
00:17:27.767        "is_configured": true,
00:17:27.767        "data_offset": 256,
00:17:27.767        "data_size": 7936
00:17:27.767      }
00:17:27.767    ]
00:17:27.767  }'
00:17:28.027    11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:28.027   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:28.027    11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:28.027   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:28.027   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:17:28.027   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:28.027   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:28.027   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:28.027   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:17:28.027   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:28.027   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:28.027  [2024-12-16 11:38:53.947038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:17:28.027  [2024-12-16 11:38:53.947098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:28.027  [2024-12-16 11:38:53.947120] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:17:28.027  [2024-12-16 11:38:53.947131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:28.027  [2024-12-16 11:38:53.947528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:28.027  [2024-12-16 11:38:53.947564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:17:28.027  [2024-12-16 11:38:53.947633] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:17:28.027  [2024-12-16 11:38:53.947651] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:17:28.027  [2024-12-16 11:38:53.947659] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:17:28.027  [2024-12-16 11:38:53.947673] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:17:28.027  BaseBdev1
00:17:28.027   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:28.027   11:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1
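Unlike the spare, the re-created BaseBdev1 is not pulled back into the array: its superblock carries seq_number 1 and no longer lists this bdev's uuid, so examine fails with Invalid argument and raid_bdev1 keeps running on BaseBdev2 alone. One way to confirm that from the shell (an illustration, not part of the test script):

./scripts/rpc.py bdev_raid_get_bdevs all |
	jq -r '.[] | select(.name == "raid_bdev1") | .base_bdevs_list[] | "\(.name) configured=\(.is_configured)"'
# expected output: "null configured=false" for the vacated slot and
# "BaseBdev2 configured=true", matching the JSON dumps that follow.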
00:17:28.964   11:38:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:28.964   11:38:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:28.964   11:38:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:28.964   11:38:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:28.964   11:38:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:28.964   11:38:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:28.964   11:38:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:28.964   11:38:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:28.964   11:38:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:28.964   11:38:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:28.964    11:38:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:28.964    11:38:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:28.964    11:38:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:28.964    11:38:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:28.964    11:38:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:28.964   11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:28.964    "name": "raid_bdev1",
00:17:28.964    "uuid": "2259315b-caea-498c-ad1a-17cc4d15181b",
00:17:28.964    "strip_size_kb": 0,
00:17:28.964    "state": "online",
00:17:28.964    "raid_level": "raid1",
00:17:28.964    "superblock": true,
00:17:28.964    "num_base_bdevs": 2,
00:17:28.964    "num_base_bdevs_discovered": 1,
00:17:28.964    "num_base_bdevs_operational": 1,
00:17:28.964    "base_bdevs_list": [
00:17:28.964      {
00:17:28.964        "name": null,
00:17:28.964        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:28.964        "is_configured": false,
00:17:28.964        "data_offset": 0,
00:17:28.964        "data_size": 7936
00:17:28.964      },
00:17:28.964      {
00:17:28.964        "name": "BaseBdev2",
00:17:28.964        "uuid": "31b2f384-5d6a-5392-a3c9-1bb0688b0e28",
00:17:28.964        "is_configured": true,
00:17:28.964        "data_offset": 256,
00:17:28.964        "data_size": 7936
00:17:28.964      }
00:17:28.964    ]
00:17:28.964  }'
00:17:28.964   11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:28.964   11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:29.535   11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:29.535   11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:29.535   11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:29.535   11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:29.535   11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:29.535    11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:29.535    11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:29.535    11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:29.535    11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:29.535    11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:29.535   11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:29.535    "name": "raid_bdev1",
00:17:29.535    "uuid": "2259315b-caea-498c-ad1a-17cc4d15181b",
00:17:29.535    "strip_size_kb": 0,
00:17:29.535    "state": "online",
00:17:29.535    "raid_level": "raid1",
00:17:29.535    "superblock": true,
00:17:29.535    "num_base_bdevs": 2,
00:17:29.535    "num_base_bdevs_discovered": 1,
00:17:29.535    "num_base_bdevs_operational": 1,
00:17:29.535    "base_bdevs_list": [
00:17:29.535      {
00:17:29.535        "name": null,
00:17:29.535        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:29.535        "is_configured": false,
00:17:29.535        "data_offset": 0,
00:17:29.535        "data_size": 7936
00:17:29.535      },
00:17:29.535      {
00:17:29.535        "name": "BaseBdev2",
00:17:29.535        "uuid": "31b2f384-5d6a-5392-a3c9-1bb0688b0e28",
00:17:29.535        "is_configured": true,
00:17:29.535        "data_offset": 256,
00:17:29.535        "data_size": 7936
00:17:29.535      }
00:17:29.535    ]
00:17:29.535  }'
00:17:29.535    11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:29.535   11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:29.535    11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:29.535   11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:29.535   11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:17:29.535   11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0
00:17:29.535   11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:17:29.535   11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:17:29.535   11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:29.535    11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:17:29.535   11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:29.535   11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:17:29.535   11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:29.535   11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:29.536  [2024-12-16 11:38:55.552482] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:29.536  [2024-12-16 11:38:55.552705] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:17:29.536  [2024-12-16 11:38:55.552725] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:17:29.536  request:
00:17:29.536  {
00:17:29.536  "base_bdev": "BaseBdev1",
00:17:29.536  "raid_bdev": "raid_bdev1",
00:17:29.536  "method": "bdev_raid_add_base_bdev",
00:17:29.536  "req_id": 1
00:17:29.536  }
00:17:29.536  Got JSON-RPC error response
00:17:29.536  response:
00:17:29.536  {
00:17:29.536  "code": -22,
00:17:29.536  "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:17:29.536  }
00:17:29.536   11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:17:29.536   11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1
00:17:29.536   11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:29.536   11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:17:29.536   11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:17:29.536   11:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1
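The bdev_raid_add_base_bdev call is wrapped in the suite's NOT helper, so the -22 (Invalid argument) response above is the expected outcome and the step passes. A simplified sketch of that helper based on the xtrace; the real common/autotest_common.sh version also validates the argument with type -t and has extra branches that are not taken here:

NOT() {
	local es=0
	"$@" || es=$?        # run the wrapped command; it is expected to fail
	# (the real helper additionally checks "es > 128" for signal deaths and
	#  an optional expected-error override; neither branch fires in this run)
	(( !es == 0 ))       # invert: a non-zero status from the command means success
}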
00:17:30.949   11:38:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:30.949   11:38:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:30.949   11:38:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:30.949   11:38:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:30.949   11:38:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:30.949   11:38:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:30.949   11:38:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:30.949   11:38:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:30.949   11:38:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:30.949   11:38:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:30.949    11:38:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:30.949    11:38:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:30.949    11:38:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:30.949    11:38:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:30.949    11:38:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:30.949   11:38:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:30.949    "name": "raid_bdev1",
00:17:30.949    "uuid": "2259315b-caea-498c-ad1a-17cc4d15181b",
00:17:30.949    "strip_size_kb": 0,
00:17:30.949    "state": "online",
00:17:30.949    "raid_level": "raid1",
00:17:30.949    "superblock": true,
00:17:30.949    "num_base_bdevs": 2,
00:17:30.949    "num_base_bdevs_discovered": 1,
00:17:30.949    "num_base_bdevs_operational": 1,
00:17:30.949    "base_bdevs_list": [
00:17:30.949      {
00:17:30.949        "name": null,
00:17:30.949        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:30.949        "is_configured": false,
00:17:30.949        "data_offset": 0,
00:17:30.949        "data_size": 7936
00:17:30.949      },
00:17:30.949      {
00:17:30.949        "name": "BaseBdev2",
00:17:30.949        "uuid": "31b2f384-5d6a-5392-a3c9-1bb0688b0e28",
00:17:30.949        "is_configured": true,
00:17:30.949        "data_offset": 256,
00:17:30.949        "data_size": 7936
00:17:30.949      }
00:17:30.949    ]
00:17:30.949  }'
00:17:30.949   11:38:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:30.949   11:38:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:30.949   11:38:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:30.949   11:38:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:30.949   11:38:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:30.949   11:38:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:30.949   11:38:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:30.949    11:38:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:30.949    11:38:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:30.949    11:38:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:30.949    11:38:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:30.949    11:38:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:30.949   11:38:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:30.949    "name": "raid_bdev1",
00:17:30.949    "uuid": "2259315b-caea-498c-ad1a-17cc4d15181b",
00:17:30.949    "strip_size_kb": 0,
00:17:30.949    "state": "online",
00:17:30.949    "raid_level": "raid1",
00:17:30.949    "superblock": true,
00:17:30.949    "num_base_bdevs": 2,
00:17:30.949    "num_base_bdevs_discovered": 1,
00:17:30.949    "num_base_bdevs_operational": 1,
00:17:30.949    "base_bdevs_list": [
00:17:30.949      {
00:17:30.949        "name": null,
00:17:30.949        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:30.949        "is_configured": false,
00:17:30.949        "data_offset": 0,
00:17:30.949        "data_size": 7936
00:17:30.949      },
00:17:30.949      {
00:17:30.949        "name": "BaseBdev2",
00:17:30.949        "uuid": "31b2f384-5d6a-5392-a3c9-1bb0688b0e28",
00:17:30.949        "is_configured": true,
00:17:30.949        "data_offset": 256,
00:17:30.949        "data_size": 7936
00:17:30.949      }
00:17:30.949    ]
00:17:30.949  }'
00:17:30.949    11:38:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:31.209   11:38:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:31.209    11:38:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:31.209   11:38:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:31.209   11:38:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 97244
00:17:31.209   11:38:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 97244 ']'
00:17:31.209   11:38:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 97244
00:17:31.209    11:38:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname
00:17:31.209   11:38:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:17:31.209    11:38:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97244
00:17:31.209   11:38:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:17:31.209  killing process with pid 97244
00:17:31.209  Received shutdown signal, test time was about 60.000000 seconds
00:17:31.209  
00:17:31.209                                                                                                  Latency(us)
00:17:31.209  
[2024-12-16T11:38:57.276Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:31.209  
[2024-12-16T11:38:57.276Z]  ===================================================================================================================
00:17:31.209  
[2024-12-16T11:38:57.276Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:17:31.209   11:38:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:17:31.209   11:38:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97244'
00:17:31.209   11:38:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 97244
00:17:31.209  [2024-12-16 11:38:57.126404] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:17:31.209  [2024-12-16 11:38:57.126553] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:31.209   11:38:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 97244
00:17:31.209  [2024-12-16 11:38:57.126613] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:31.209  [2024-12-16 11:38:57.126623] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline
00:17:31.209  [2024-12-16 11:38:57.157516] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:17:31.468   11:38:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0
00:17:31.468  
00:17:31.468  real	0m18.525s
00:17:31.468  user	0m24.609s
00:17:31.468  sys	0m2.712s
00:17:31.469   11:38:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable
00:17:31.469  ************************************
00:17:31.469  END TEST raid_rebuild_test_sb_4k
00:17:31.469  ************************************
00:17:31.469   11:38:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:31.469   11:38:57 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32'
00:17:31.469   11:38:57 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true
00:17:31.469   11:38:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:17:31.469   11:38:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:17:31.469   11:38:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:17:31.469  ************************************
00:17:31.469  START TEST raid_state_function_test_sb_md_separate
00:17:31.469  ************************************
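run_test is the harness wrapper that prints these START/END banners and the real/user/sys figures seen at the close of the previous case. The invocation for this case, as visible at bdev_raid.sh@1004 in the trace:

run_test raid_state_function_test_sb_md_separate \
	raid_state_function_test raid1 2 true
# the trailing arguments map to raid_level=raid1, num_base_bdevs=2 and
# superblock=true, matching the locals set at bdev_raid.sh@205-207 below.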
00:17:31.469   11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true
00:17:31.469   11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:17:31.469   11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:17:31.469   11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:17:31.469   11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:17:31.469    11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:17:31.469    11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:17:31.469    11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:17:31.469    11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:17:31.469    11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:17:31.469    11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:17:31.469    11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:17:31.469    11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:17:31.469   11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:17:31.469   11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:17:31.469   11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:17:31.469   11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size
00:17:31.469   11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:17:31.469   11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:17:31.469   11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:17:31.469   11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:17:31.469   11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:17:31.469   11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:17:31.469  Process raid pid: 97918
00:17:31.469   11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=97918
00:17:31.469   11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:17:31.469   11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 97918'
00:17:31.469   11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 97918
00:17:31.469   11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97918 ']'
00:17:31.469  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:31.469   11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:31.469   11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100
00:17:31.469   11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:31.469   11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable
00:17:31.469   11:38:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:31.728  [2024-12-16 11:38:57.568251] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:17:31.728  [2024-12-16 11:38:57.568499] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:31.728  [2024-12-16 11:38:57.733832] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:31.728  [2024-12-16 11:38:57.782054] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:17:31.987  [2024-12-16 11:38:57.824714] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:31.987  [2024-12-16 11:38:57.824761] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:32.557   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:17:32.557   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0
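Each state-function case starts its own bdev_svc stub app and waits for the RPC socket before issuing commands. Roughly the pattern behind the trace above ($rootdir stands in for the /home/vagrant/spdk_repo/spdk checkout on this CI host; waitforlisten's retry loop itself is not shown in this excerpt):

# start the stub SPDK app with bdev_raid debug logging, remember its pid
"$rootdir/test/app/bdev_svc/bdev_svc" -i 0 -L bdev_raid &
raid_pid=$!

# poll /var/tmp/spdk.sock (up to max_retries=100) until the app answers RPCs
waitforlisten "$raid_pid"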
00:17:32.557   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:17:32.557   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:32.557   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:32.557  [2024-12-16 11:38:58.398408] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:17:32.557  [2024-12-16 11:38:58.398462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:17:32.557  [2024-12-16 11:38:58.398475] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:32.557  [2024-12-16 11:38:58.398485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:32.557   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:32.557   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:17:32.557   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:17:32.557   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:17:32.557   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:32.557   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:32.557   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:32.557   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:32.557   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:32.557   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:32.557   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:32.557    11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:32.557    11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:32.557    11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:32.557    11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:32.557    11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:32.557   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:32.557    "name": "Existed_Raid",
00:17:32.557    "uuid": "834a85e1-2321-4ed0-98c5-4508ad5faca0",
00:17:32.557    "strip_size_kb": 0,
00:17:32.557    "state": "configuring",
00:17:32.557    "raid_level": "raid1",
00:17:32.557    "superblock": true,
00:17:32.557    "num_base_bdevs": 2,
00:17:32.557    "num_base_bdevs_discovered": 0,
00:17:32.557    "num_base_bdevs_operational": 2,
00:17:32.557    "base_bdevs_list": [
00:17:32.557      {
00:17:32.557        "name": "BaseBdev1",
00:17:32.557        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:32.557        "is_configured": false,
00:17:32.557        "data_offset": 0,
00:17:32.557        "data_size": 0
00:17:32.557      },
00:17:32.557      {
00:17:32.557        "name": "BaseBdev2",
00:17:32.557        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:32.557        "is_configured": false,
00:17:32.557        "data_offset": 0,
00:17:32.557        "data_size": 0
00:17:32.557      }
00:17:32.557    ]
00:17:32.557  }'
00:17:32.557   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:32.557   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
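bdev_raid_create is deliberately issued before either base bdev exists; the array is registered in the "configuring" state with zero discovered members, which is exactly what the JSON above reports. As a plain rpc.py call (illustration only; the test drives it through rpc_cmd):

./scripts/rpc.py bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
# -s requests an on-disk superblock; the raid stays "configuring" and
# num_base_bdevs_discovered stays 0 until both named bdevs appear and are claimed.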
00:17:32.817   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:17:32.817   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:32.817   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:32.817  [2024-12-16 11:38:58.865574] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:17:32.817  [2024-12-16 11:38:58.865683] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:17:32.817   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:32.817   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:17:32.817   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:32.817   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:32.817  [2024-12-16 11:38:58.877587] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:17:32.817  [2024-12-16 11:38:58.877678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:17:32.817  [2024-12-16 11:38:58.877712] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:32.817  [2024-12-16 11:38:58.877740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:32.817   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:32.817   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1
00:17:33.077   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:33.077   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:33.077  [2024-12-16 11:38:58.898987] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:33.077  BaseBdev1
00:17:33.077   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
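The "_md_separate" variant builds its base bdevs as 4 KiB malloc disks with a detached 32-byte metadata area, which is where the md_size/md_interleave fields in the dump below come from. The call just made, shown as a standalone rpc.py invocation for reference:

./scripts/rpc.py bdev_malloc_create 32 4096 -m 32 -b BaseBdev1
# 32 MiB total at a 4096-byte block size -> 8192 blocks, plus 32 bytes of
# separate (non-interleaved) metadata per block, as reflected in
# "num_blocks": 8192, "md_size": 32, "md_interleave": false below.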
00:17:33.077   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:17:33.077   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:17:33.077   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:17:33.077   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i
00:17:33.077   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:17:33.077   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:17:33.077   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:17:33.077   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:33.077   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:33.077   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:33.077   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:17:33.077   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:33.077   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:33.077  [
00:17:33.077  {
00:17:33.077  "name": "BaseBdev1",
00:17:33.077  "aliases": [
00:17:33.077  "c98fb519-a558-4307-8fb7-5cb8d551ef77"
00:17:33.077  ],
00:17:33.077  "product_name": "Malloc disk",
00:17:33.077  "block_size": 4096,
00:17:33.077  "num_blocks": 8192,
00:17:33.077  "uuid": "c98fb519-a558-4307-8fb7-5cb8d551ef77",
00:17:33.077  "md_size": 32,
00:17:33.077  "md_interleave": false,
00:17:33.077  "dif_type": 0,
00:17:33.077  "assigned_rate_limits": {
00:17:33.077  "rw_ios_per_sec": 0,
00:17:33.077  "rw_mbytes_per_sec": 0,
00:17:33.077  "r_mbytes_per_sec": 0,
00:17:33.077  "w_mbytes_per_sec": 0
00:17:33.077  },
00:17:33.077  "claimed": true,
00:17:33.077  "claim_type": "exclusive_write",
00:17:33.077  "zoned": false,
00:17:33.077  "supported_io_types": {
00:17:33.077  "read": true,
00:17:33.077  "write": true,
00:17:33.077  "unmap": true,
00:17:33.077  "flush": true,
00:17:33.077  "reset": true,
00:17:33.077  "nvme_admin": false,
00:17:33.077  "nvme_io": false,
00:17:33.077  "nvme_io_md": false,
00:17:33.077  "write_zeroes": true,
00:17:33.077  "zcopy": true,
00:17:33.077  "get_zone_info": false,
00:17:33.077  "zone_management": false,
00:17:33.077  "zone_append": false,
00:17:33.077  "compare": false,
00:17:33.077  "compare_and_write": false,
00:17:33.077  "abort": true,
00:17:33.077  "seek_hole": false,
00:17:33.077  "seek_data": false,
00:17:33.077  "copy": true,
00:17:33.077  "nvme_iov_md": false
00:17:33.077  },
00:17:33.077  "memory_domains": [
00:17:33.077  {
00:17:33.077  "dma_device_id": "system",
00:17:33.077  "dma_device_type": 1
00:17:33.077  },
00:17:33.077  {
00:17:33.077  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:33.078  "dma_device_type": 2
00:17:33.078  }
00:17:33.078  ],
00:17:33.078  "driver_specific": {}
00:17:33.078  }
00:17:33.078  ]
00:17:33.078   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:33.078   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0
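waitforbdev (common/autotest_common.sh@899-907 in the trace) blocks until the newly created bdev is visible, which also gives the raid examine callbacks time to run. A sketch reconstructed from the visible trace; the real helper may retry in a loop that this excerpt does not show:

waitforbdev() {
	local bdev_name=$1
	local bdev_timeout=${2:-2000}        # ms; the default used in this run
	rpc_cmd bdev_wait_for_examine        # let pending examine callbacks finish
	rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
}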
00:17:33.078   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:17:33.078   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:17:33.078   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:17:33.078   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:33.078   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:33.078   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:33.078   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:33.078   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:33.078   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:33.078   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:33.078    11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:33.078    11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:33.078    11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:33.078    11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:33.078    11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:33.078   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:33.078    "name": "Existed_Raid",
00:17:33.078    "uuid": "6aeb4f77-b254-4e33-9ce5-5601c62829c0",
00:17:33.078    "strip_size_kb": 0,
00:17:33.078    "state": "configuring",
00:17:33.078    "raid_level": "raid1",
00:17:33.078    "superblock": true,
00:17:33.078    "num_base_bdevs": 2,
00:17:33.078    "num_base_bdevs_discovered": 1,
00:17:33.078    "num_base_bdevs_operational": 2,
00:17:33.078    "base_bdevs_list": [
00:17:33.078      {
00:17:33.078        "name": "BaseBdev1",
00:17:33.078        "uuid": "c98fb519-a558-4307-8fb7-5cb8d551ef77",
00:17:33.078        "is_configured": true,
00:17:33.078        "data_offset": 256,
00:17:33.078        "data_size": 7936
00:17:33.078      },
00:17:33.078      {
00:17:33.078        "name": "BaseBdev2",
00:17:33.078        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:33.078        "is_configured": false,
00:17:33.078        "data_offset": 0,
00:17:33.078        "data_size": 0
00:17:33.078      }
00:17:33.078    ]
00:17:33.078  }'
00:17:33.078   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:33.078   11:38:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:33.337   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:17:33.337   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:33.337   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:33.337  [2024-12-16 11:38:59.378246] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:17:33.337  [2024-12-16 11:38:59.378365] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:17:33.337   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:33.337   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:17:33.337   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:33.337   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:33.337  [2024-12-16 11:38:59.390289] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:33.337  [2024-12-16 11:38:59.392179] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:33.337  [2024-12-16 11:38:59.392270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:33.337   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:33.337   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:17:33.337   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:17:33.337   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:17:33.337   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:17:33.337   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:17:33.337   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:33.337   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:33.337   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:33.337   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:33.337   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:33.337   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:33.337   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:33.597    11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:33.597    11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:33.597    11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:33.597    11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:33.597    11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:33.597   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:33.597    "name": "Existed_Raid",
00:17:33.597    "uuid": "35f8c678-d622-4f6e-8093-a690488a3af2",
00:17:33.597    "strip_size_kb": 0,
00:17:33.597    "state": "configuring",
00:17:33.597    "raid_level": "raid1",
00:17:33.597    "superblock": true,
00:17:33.597    "num_base_bdevs": 2,
00:17:33.597    "num_base_bdevs_discovered": 1,
00:17:33.597    "num_base_bdevs_operational": 2,
00:17:33.597    "base_bdevs_list": [
00:17:33.597      {
00:17:33.597        "name": "BaseBdev1",
00:17:33.597        "uuid": "c98fb519-a558-4307-8fb7-5cb8d551ef77",
00:17:33.597        "is_configured": true,
00:17:33.597        "data_offset": 256,
00:17:33.597        "data_size": 7936
00:17:33.597      },
00:17:33.597      {
00:17:33.597        "name": "BaseBdev2",
00:17:33.597        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:33.597        "is_configured": false,
00:17:33.597        "data_offset": 0,
00:17:33.597        "data_size": 0
00:17:33.597      }
00:17:33.597    ]
00:17:33.597  }'
00:17:33.597   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:33.597   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:33.856   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2
00:17:33.856   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:33.856   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:34.116  [2024-12-16 11:38:59.925727] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:17:34.116  [2024-12-16 11:38:59.926084] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:17:34.116  [2024-12-16 11:38:59.926162] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:17:34.116  [2024-12-16 11:38:59.926332] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:17:34.116  [2024-12-16 11:38:59.926529] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:17:34.116  [2024-12-16 11:38:59.926615] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:17:34.116  BaseBdev2
00:17:34.116  [2024-12-16 11:38:59.926783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:34.116   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:34.116   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:17:34.116   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:17:34.116   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:17:34.116   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i
00:17:34.116   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:17:34.116   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:17:34.116   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:17:34.116   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:34.116   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:34.116   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:34.116   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:17:34.116   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:34.116   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:34.116  [
00:17:34.116    {
00:17:34.116      "name": "BaseBdev2",
00:17:34.116      "aliases": [
00:17:34.116        "8d4f3d52-bfa3-432f-b659-3b744f06daed"
00:17:34.116      ],
00:17:34.116      "product_name": "Malloc disk",
00:17:34.116      "block_size": 4096,
00:17:34.116      "num_blocks": 8192,
00:17:34.116      "uuid": "8d4f3d52-bfa3-432f-b659-3b744f06daed",
00:17:34.116      "md_size": 32,
00:17:34.116      "md_interleave": false,
00:17:34.116      "dif_type": 0,
00:17:34.116      "assigned_rate_limits": {
00:17:34.116        "rw_ios_per_sec": 0,
00:17:34.116        "rw_mbytes_per_sec": 0,
00:17:34.116        "r_mbytes_per_sec": 0,
00:17:34.116        "w_mbytes_per_sec": 0
00:17:34.116      },
00:17:34.116      "claimed": true,
00:17:34.116      "claim_type": "exclusive_write",
00:17:34.116      "zoned": false,
00:17:34.116      "supported_io_types": {
00:17:34.116        "read": true,
00:17:34.116        "write": true,
00:17:34.116        "unmap": true,
00:17:34.116        "flush": true,
00:17:34.116        "reset": true,
00:17:34.116        "nvme_admin": false,
00:17:34.116        "nvme_io": false,
00:17:34.116        "nvme_io_md": false,
00:17:34.116        "write_zeroes": true,
00:17:34.116        "zcopy": true,
00:17:34.116        "get_zone_info": false,
00:17:34.116        "zone_management": false,
00:17:34.116        "zone_append": false,
00:17:34.116        "compare": false,
00:17:34.116        "compare_and_write": false,
00:17:34.116        "abort": true,
00:17:34.116        "seek_hole": false,
00:17:34.116        "seek_data": false,
00:17:34.116        "copy": true,
00:17:34.116        "nvme_iov_md": false
00:17:34.116      },
00:17:34.116      "memory_domains": [
00:17:34.116        {
00:17:34.116          "dma_device_id": "system",
00:17:34.116          "dma_device_type": 1
00:17:34.116        },
00:17:34.116        {
00:17:34.116          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:34.116          "dma_device_type": 2
00:17:34.116        }
00:17:34.116      ],
00:17:34.116      "driver_specific": {}
00:17:34.116    }
00:17:34.116  ]
00:17:34.116   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:34.116   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0
00:17:34.116   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:17:34.116   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:17:34.116   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:17:34.116   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:17:34.116   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:34.116   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:34.116   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:34.116   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:34.116   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:34.116   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:34.116   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:34.116   11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:34.116    11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:34.117    11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:34.117    11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:34.117    11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:34.117    11:38:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:34.117   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:34.117    "name": "Existed_Raid",
00:17:34.117    "uuid": "35f8c678-d622-4f6e-8093-a690488a3af2",
00:17:34.117    "strip_size_kb": 0,
00:17:34.117    "state": "online",
00:17:34.117    "raid_level": "raid1",
00:17:34.117    "superblock": true,
00:17:34.117    "num_base_bdevs": 2,
00:17:34.117    "num_base_bdevs_discovered": 2,
00:17:34.117    "num_base_bdevs_operational": 2,
00:17:34.117    "base_bdevs_list": [
00:17:34.117      {
00:17:34.117        "name": "BaseBdev1",
00:17:34.117        "uuid": "c98fb519-a558-4307-8fb7-5cb8d551ef77",
00:17:34.117        "is_configured": true,
00:17:34.117        "data_offset": 256,
00:17:34.117        "data_size": 7936
00:17:34.117      },
00:17:34.117      {
00:17:34.117        "name": "BaseBdev2",
00:17:34.117        "uuid": "8d4f3d52-bfa3-432f-b659-3b744f06daed",
00:17:34.117        "is_configured": true,
00:17:34.117        "data_offset": 256,
00:17:34.117        "data_size": 7936
00:17:34.117      }
00:17:34.117    ]
00:17:34.117  }'
00:17:34.117   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:34.117   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:34.376   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:17:34.376   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:17:34.376   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:17:34.376   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:17:34.376   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name
00:17:34.376   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:17:34.376    11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:17:34.376    11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:17:34.376    11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:34.376    11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:34.376  [2024-12-16 11:39:00.385335] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:34.376    11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:34.376   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:17:34.376    "name": "Existed_Raid",
00:17:34.376    "aliases": [
00:17:34.376      "35f8c678-d622-4f6e-8093-a690488a3af2"
00:17:34.376    ],
00:17:34.376    "product_name": "Raid Volume",
00:17:34.376    "block_size": 4096,
00:17:34.376    "num_blocks": 7936,
00:17:34.376    "uuid": "35f8c678-d622-4f6e-8093-a690488a3af2",
00:17:34.376    "md_size": 32,
00:17:34.376    "md_interleave": false,
00:17:34.376    "dif_type": 0,
00:17:34.376    "assigned_rate_limits": {
00:17:34.376      "rw_ios_per_sec": 0,
00:17:34.376      "rw_mbytes_per_sec": 0,
00:17:34.376      "r_mbytes_per_sec": 0,
00:17:34.376      "w_mbytes_per_sec": 0
00:17:34.376    },
00:17:34.376    "claimed": false,
00:17:34.376    "zoned": false,
00:17:34.376    "supported_io_types": {
00:17:34.376      "read": true,
00:17:34.376      "write": true,
00:17:34.376      "unmap": false,
00:17:34.376      "flush": false,
00:17:34.376      "reset": true,
00:17:34.376      "nvme_admin": false,
00:17:34.376      "nvme_io": false,
00:17:34.376      "nvme_io_md": false,
00:17:34.376      "write_zeroes": true,
00:17:34.376      "zcopy": false,
00:17:34.376      "get_zone_info": false,
00:17:34.376      "zone_management": false,
00:17:34.376      "zone_append": false,
00:17:34.376      "compare": false,
00:17:34.376      "compare_and_write": false,
00:17:34.376      "abort": false,
00:17:34.376      "seek_hole": false,
00:17:34.376      "seek_data": false,
00:17:34.376      "copy": false,
00:17:34.376      "nvme_iov_md": false
00:17:34.376    },
00:17:34.376    "memory_domains": [
00:17:34.376      {
00:17:34.376        "dma_device_id": "system",
00:17:34.376        "dma_device_type": 1
00:17:34.376      },
00:17:34.376      {
00:17:34.376        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:34.376        "dma_device_type": 2
00:17:34.376      },
00:17:34.376      {
00:17:34.376        "dma_device_id": "system",
00:17:34.376        "dma_device_type": 1
00:17:34.376      },
00:17:34.376      {
00:17:34.376        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:34.376        "dma_device_type": 2
00:17:34.376      }
00:17:34.376    ],
00:17:34.376    "driver_specific": {
00:17:34.376      "raid": {
00:17:34.376        "uuid": "35f8c678-d622-4f6e-8093-a690488a3af2",
00:17:34.376        "strip_size_kb": 0,
00:17:34.376        "state": "online",
00:17:34.376        "raid_level": "raid1",
00:17:34.376        "superblock": true,
00:17:34.376        "num_base_bdevs": 2,
00:17:34.376        "num_base_bdevs_discovered": 2,
00:17:34.376        "num_base_bdevs_operational": 2,
00:17:34.376        "base_bdevs_list": [
00:17:34.376          {
00:17:34.376            "name": "BaseBdev1",
00:17:34.376            "uuid": "c98fb519-a558-4307-8fb7-5cb8d551ef77",
00:17:34.376            "is_configured": true,
00:17:34.376            "data_offset": 256,
00:17:34.376            "data_size": 7936
00:17:34.376          },
00:17:34.376          {
00:17:34.376            "name": "BaseBdev2",
00:17:34.376            "uuid": "8d4f3d52-bfa3-432f-b659-3b744f06daed",
00:17:34.376            "is_configured": true,
00:17:34.376            "data_offset": 256,
00:17:34.376            "data_size": 7936
00:17:34.376          }
00:17:34.376        ]
00:17:34.376      }
00:17:34.376    }
00:17:34.376  }'
00:17:34.376    11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:17:34.636   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:17:34.636  BaseBdev2'
00:17:34.636    11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:34.636   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0'
00:17:34.636   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:34.636    11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:34.636    11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:17:34.636    11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:34.637    11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:34.637    11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:34.637   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:17:34.637   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:17:34.637   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:34.637    11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:17:34.637    11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:34.637    11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:34.637    11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:34.637    11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:34.637   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:17:34.637   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:17:34.637   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:17:34.637   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:34.637   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:34.637  [2024-12-16 11:39:00.600709] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:17:34.637   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:34.637   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state
00:17:34.637   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:17:34.637   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in
00:17:34.637   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0
00:17:34.637   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:17:34.637   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1
00:17:34.637   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:17:34.637   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:34.637   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:34.637   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:34.637   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:34.637   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:34.637   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:34.637   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:34.637   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:34.637    11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:34.637    11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:34.637    11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:34.637    11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:34.637    11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:34.637   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:34.637    "name": "Existed_Raid",
00:17:34.637    "uuid": "35f8c678-d622-4f6e-8093-a690488a3af2",
00:17:34.637    "strip_size_kb": 0,
00:17:34.637    "state": "online",
00:17:34.637    "raid_level": "raid1",
00:17:34.637    "superblock": true,
00:17:34.637    "num_base_bdevs": 2,
00:17:34.637    "num_base_bdevs_discovered": 1,
00:17:34.637    "num_base_bdevs_operational": 1,
00:17:34.637    "base_bdevs_list": [
00:17:34.637      {
00:17:34.637        "name": null,
00:17:34.637        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:34.637        "is_configured": false,
00:17:34.637        "data_offset": 0,
00:17:34.637        "data_size": 7936
00:17:34.637      },
00:17:34.637      {
00:17:34.637        "name": "BaseBdev2",
00:17:34.637        "uuid": "8d4f3d52-bfa3-432f-b659-3b744f06daed",
00:17:34.637        "is_configured": true,
00:17:34.637        "data_offset": 256,
00:17:34.637        "data_size": 7936
00:17:34.637      }
00:17:34.637    ]
00:17:34.637  }'
00:17:34.637   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:34.637   11:39:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:35.206   11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:17:35.206   11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:17:35.206    11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:35.206    11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:17:35.206    11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:35.206    11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:35.206    11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:35.206   11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:17:35.206   11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:17:35.206   11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:17:35.206   11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:35.206   11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:35.206  [2024-12-16 11:39:01.095888] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:17:35.206  [2024-12-16 11:39:01.096005] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:35.206  [2024-12-16 11:39:01.108632] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:35.206  [2024-12-16 11:39:01.108678] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:35.206  [2024-12-16 11:39:01.108696] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline
00:17:35.206   11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:35.206   11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:17:35.206   11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:17:35.206    11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:17:35.206    11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:35.206    11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:35.206    11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:35.206    11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:35.206   11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:17:35.206   11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:17:35.206   11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:17:35.206   11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 97918
00:17:35.206   11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97918 ']'
00:17:35.206   11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 97918
00:17:35.206    11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname
00:17:35.206   11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:17:35.206    11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97918
00:17:35.206   11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:17:35.206  killing process with pid 97918
00:17:35.206   11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:17:35.206   11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97918'
00:17:35.206   11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 97918
00:17:35.206  [2024-12-16 11:39:01.196244] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:17:35.206   11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 97918
00:17:35.206  [2024-12-16 11:39:01.197232] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:17:35.466   11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0
00:17:35.466  
00:17:35.466  real	0m3.970s
00:17:35.466  user	0m6.203s
00:17:35.466  sys	0m0.882s
00:17:35.466   11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable
00:17:35.466  ************************************
00:17:35.466  END TEST raid_state_function_test_sb_md_separate
00:17:35.466  ************************************
00:17:35.466   11:39:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:35.466   11:39:01 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2
00:17:35.466   11:39:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:17:35.466   11:39:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:17:35.466   11:39:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:17:35.466  ************************************
00:17:35.466  START TEST raid_superblock_test_md_separate
00:17:35.466  ************************************
00:17:35.466   11:39:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2
00:17:35.466   11:39:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:17:35.466   11:39:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:17:35.466   11:39:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:17:35.466   11:39:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:17:35.466   11:39:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:17:35.466   11:39:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:17:35.466   11:39:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:17:35.466   11:39:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:17:35.466   11:39:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:17:35.466   11:39:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size
00:17:35.466   11:39:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:17:35.466   11:39:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:17:35.466   11:39:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:17:35.466   11:39:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:17:35.466   11:39:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:17:35.466   11:39:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=98159
00:17:35.466   11:39:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:17:35.466   11:39:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 98159
00:17:35.466   11:39:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 98159 ']'
00:17:35.466   11:39:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:35.466   11:39:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100
00:17:35.466  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:35.466   11:39:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:35.466   11:39:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable
00:17:35.466   11:39:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:35.725  [2024-12-16 11:39:01.597807] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:17:35.725  [2024-12-16 11:39:01.597934] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98159 ]
00:17:35.725  [2024-12-16 11:39:01.759000] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:35.985  [2024-12-16 11:39:01.806372] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:17:35.985  [2024-12-16 11:39:01.849243] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:35.985  [2024-12-16 11:39:01.849289] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:36.553   11:39:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:36.554  malloc1
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:36.554  [2024-12-16 11:39:02.504026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:17:36.554  [2024-12-16 11:39:02.504091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:36.554  [2024-12-16 11:39:02.504113] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:17:36.554  [2024-12-16 11:39:02.504124] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:36.554  [2024-12-16 11:39:02.506079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:36.554  [2024-12-16 11:39:02.506117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:17:36.554  pt1
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:36.554  malloc2
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:36.554  [2024-12-16 11:39:02.541425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:17:36.554  [2024-12-16 11:39:02.541499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:36.554  [2024-12-16 11:39:02.541520] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:17:36.554  [2024-12-16 11:39:02.541554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:36.554  [2024-12-16 11:39:02.543817] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:36.554  [2024-12-16 11:39:02.543856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:17:36.554  pt2
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:36.554  [2024-12-16 11:39:02.553410] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:17:36.554  [2024-12-16 11:39:02.555218] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:36.554  [2024-12-16 11:39:02.555381] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:17:36.554  [2024-12-16 11:39:02.555398] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:17:36.554  [2024-12-16 11:39:02.555489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:17:36.554  [2024-12-16 11:39:02.555588] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:17:36.554  [2024-12-16 11:39:02.555598] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:17:36.554  [2024-12-16 11:39:02.555681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:36.554    11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:36.554    11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:36.554    11:39:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:36.554    11:39:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:36.554    11:39:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:36.554    "name": "raid_bdev1",
00:17:36.554    "uuid": "5dd52772-67cb-45c1-bb78-91e7a9d43fe2",
00:17:36.554    "strip_size_kb": 0,
00:17:36.554    "state": "online",
00:17:36.554    "raid_level": "raid1",
00:17:36.554    "superblock": true,
00:17:36.554    "num_base_bdevs": 2,
00:17:36.554    "num_base_bdevs_discovered": 2,
00:17:36.554    "num_base_bdevs_operational": 2,
00:17:36.554    "base_bdevs_list": [
00:17:36.554      {
00:17:36.554        "name": "pt1",
00:17:36.554        "uuid": "00000000-0000-0000-0000-000000000001",
00:17:36.554        "is_configured": true,
00:17:36.554        "data_offset": 256,
00:17:36.554        "data_size": 7936
00:17:36.554      },
00:17:36.554      {
00:17:36.554        "name": "pt2",
00:17:36.554        "uuid": "00000000-0000-0000-0000-000000000002",
00:17:36.554        "is_configured": true,
00:17:36.554        "data_offset": 256,
00:17:36.554        "data_size": 7936
00:17:36.554      }
00:17:36.554    ]
00:17:36.554  }'
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:36.554   11:39:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:37.124   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:17:37.124   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:17:37.124   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:17:37.124   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:17:37.124   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name
00:17:37.124   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:17:37.124    11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:37.124    11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:17:37.124    11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:37.124    11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:37.124  [2024-12-16 11:39:03.016981] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:37.124    11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:37.124   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:17:37.124    "name": "raid_bdev1",
00:17:37.124    "aliases": [
00:17:37.124      "5dd52772-67cb-45c1-bb78-91e7a9d43fe2"
00:17:37.124    ],
00:17:37.124    "product_name": "Raid Volume",
00:17:37.124    "block_size": 4096,
00:17:37.124    "num_blocks": 7936,
00:17:37.124    "uuid": "5dd52772-67cb-45c1-bb78-91e7a9d43fe2",
00:17:37.124    "md_size": 32,
00:17:37.124    "md_interleave": false,
00:17:37.124    "dif_type": 0,
00:17:37.124    "assigned_rate_limits": {
00:17:37.124      "rw_ios_per_sec": 0,
00:17:37.124      "rw_mbytes_per_sec": 0,
00:17:37.124      "r_mbytes_per_sec": 0,
00:17:37.124      "w_mbytes_per_sec": 0
00:17:37.124    },
00:17:37.124    "claimed": false,
00:17:37.124    "zoned": false,
00:17:37.124    "supported_io_types": {
00:17:37.124      "read": true,
00:17:37.124      "write": true,
00:17:37.124      "unmap": false,
00:17:37.124      "flush": false,
00:17:37.124      "reset": true,
00:17:37.124      "nvme_admin": false,
00:17:37.124      "nvme_io": false,
00:17:37.124      "nvme_io_md": false,
00:17:37.124      "write_zeroes": true,
00:17:37.124      "zcopy": false,
00:17:37.124      "get_zone_info": false,
00:17:37.124      "zone_management": false,
00:17:37.124      "zone_append": false,
00:17:37.124      "compare": false,
00:17:37.124      "compare_and_write": false,
00:17:37.124      "abort": false,
00:17:37.124      "seek_hole": false,
00:17:37.124      "seek_data": false,
00:17:37.124      "copy": false,
00:17:37.124      "nvme_iov_md": false
00:17:37.124    },
00:17:37.124    "memory_domains": [
00:17:37.124      {
00:17:37.124        "dma_device_id": "system",
00:17:37.124        "dma_device_type": 1
00:17:37.124      },
00:17:37.124      {
00:17:37.124        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:37.124        "dma_device_type": 2
00:17:37.124      },
00:17:37.124      {
00:17:37.124        "dma_device_id": "system",
00:17:37.124        "dma_device_type": 1
00:17:37.124      },
00:17:37.124      {
00:17:37.124        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:37.124        "dma_device_type": 2
00:17:37.124      }
00:17:37.124    ],
00:17:37.124    "driver_specific": {
00:17:37.124      "raid": {
00:17:37.124        "uuid": "5dd52772-67cb-45c1-bb78-91e7a9d43fe2",
00:17:37.124        "strip_size_kb": 0,
00:17:37.124        "state": "online",
00:17:37.124        "raid_level": "raid1",
00:17:37.124        "superblock": true,
00:17:37.124        "num_base_bdevs": 2,
00:17:37.124        "num_base_bdevs_discovered": 2,
00:17:37.124        "num_base_bdevs_operational": 2,
00:17:37.124        "base_bdevs_list": [
00:17:37.124          {
00:17:37.124            "name": "pt1",
00:17:37.124            "uuid": "00000000-0000-0000-0000-000000000001",
00:17:37.124            "is_configured": true,
00:17:37.124            "data_offset": 256,
00:17:37.124            "data_size": 7936
00:17:37.124          },
00:17:37.124          {
00:17:37.124            "name": "pt2",
00:17:37.124            "uuid": "00000000-0000-0000-0000-000000000002",
00:17:37.124            "is_configured": true,
00:17:37.124            "data_offset": 256,
00:17:37.124            "data_size": 7936
00:17:37.124          }
00:17:37.124        ]
00:17:37.124      }
00:17:37.124    }
00:17:37.124  }'
00:17:37.124    11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:17:37.124   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:17:37.124  pt2'
00:17:37.124    11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:37.124   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0'
00:17:37.124   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:37.124    11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:37.124    11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:17:37.124    11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:37.124    11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:37.124    11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:37.385   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:17:37.385   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:17:37.385   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:37.385    11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:17:37.385    11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:37.385    11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:37.385    11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:37.385    11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:37.385   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:17:37.385   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:17:37.385    11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:17:37.385    11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:37.385    11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:37.385    11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:37.385  [2024-12-16 11:39:03.256406] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:37.385    11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:37.385   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5dd52772-67cb-45c1-bb78-91e7a9d43fe2
00:17:37.385   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 5dd52772-67cb-45c1-bb78-91e7a9d43fe2 ']'
00:17:37.385   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:17:37.385   11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:37.385   11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:37.385  [2024-12-16 11:39:03.284126] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:37.385  [2024-12-16 11:39:03.284156] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:37.385  [2024-12-16 11:39:03.284239] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:37.385  [2024-12-16 11:39:03.284300] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:37.385  [2024-12-16 11:39:03.284310] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:17:37.385   11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:37.385    11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:37.385    11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:37.385    11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:37.385    11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:17:37.385    11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:37.385   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:17:37.385   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:17:37.385   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:17:37.385   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:17:37.385   11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:37.385   11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:37.385   11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:37.385   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:17:37.385   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:17:37.385   11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:37.385   11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:37.385   11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:37.385    11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:17:37.385    11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:17:37.385    11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:37.385    11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:37.385    11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:37.385   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:17:37.385   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:17:37.385   11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0
00:17:37.385   11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:17:37.385   11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:17:37.385   11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:37.385    11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:17:37.385   11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:37.385   11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:17:37.385   11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:37.386   11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:37.386  [2024-12-16 11:39:03.423962] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:17:37.386  [2024-12-16 11:39:03.426008] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:17:37.386  [2024-12-16 11:39:03.426081] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:17:37.386  [2024-12-16 11:39:03.426128] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:17:37.386  [2024-12-16 11:39:03.426144] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:37.386  [2024-12-16 11:39:03.426154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring
00:17:37.386  request:
00:17:37.386  {
00:17:37.386  "name": "raid_bdev1",
00:17:37.386  "raid_level": "raid1",
00:17:37.386  "base_bdevs": [
00:17:37.386  "malloc1",
00:17:37.386  "malloc2"
00:17:37.386  ],
00:17:37.386  "superblock": false,
00:17:37.386  "method": "bdev_raid_create",
00:17:37.386  "req_id": 1
00:17:37.386  }
00:17:37.386  Got JSON-RPC error response
00:17:37.386  response:
00:17:37.386  {
00:17:37.386  "code": -17,
00:17:37.386  "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:17:37.386  }
00:17:37.386   11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:17:37.386   11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1
00:17:37.386   11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:37.386   11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:17:37.386   11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:17:37.386    11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:37.386    11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:17:37.386    11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:37.386    11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:37.386    11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:37.647   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:17:37.647   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:17:37.648   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:17:37.648   11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:37.648   11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:37.648  [2024-12-16 11:39:03.487750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:17:37.648  [2024-12-16 11:39:03.487800] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:37.648  [2024-12-16 11:39:03.487819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:17:37.648  [2024-12-16 11:39:03.487829] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:37.648  [2024-12-16 11:39:03.489746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:37.648  [2024-12-16 11:39:03.489776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:17:37.648  [2024-12-16 11:39:03.489829] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:17:37.648  [2024-12-16 11:39:03.489871] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:17:37.648  pt1
00:17:37.648   11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:37.648   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:17:37.648   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:37.648   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:17:37.648   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:37.648   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:37.648   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:37.648   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:37.648   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:37.648   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:37.648   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:37.648    11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:37.648    11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:37.648    11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:37.648    11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:37.648    11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:37.648   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:37.648    "name": "raid_bdev1",
00:17:37.648    "uuid": "5dd52772-67cb-45c1-bb78-91e7a9d43fe2",
00:17:37.648    "strip_size_kb": 0,
00:17:37.648    "state": "configuring",
00:17:37.648    "raid_level": "raid1",
00:17:37.648    "superblock": true,
00:17:37.648    "num_base_bdevs": 2,
00:17:37.648    "num_base_bdevs_discovered": 1,
00:17:37.648    "num_base_bdevs_operational": 2,
00:17:37.648    "base_bdevs_list": [
00:17:37.648      {
00:17:37.648        "name": "pt1",
00:17:37.648        "uuid": "00000000-0000-0000-0000-000000000001",
00:17:37.648        "is_configured": true,
00:17:37.648        "data_offset": 256,
00:17:37.648        "data_size": 7936
00:17:37.648      },
00:17:37.648      {
00:17:37.648        "name": null,
00:17:37.648        "uuid": "00000000-0000-0000-0000-000000000002",
00:17:37.648        "is_configured": false,
00:17:37.648        "data_offset": 256,
00:17:37.648        "data_size": 7936
00:17:37.648      }
00:17:37.648    ]
00:17:37.648  }'
00:17:37.648   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:37.648   11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:37.908   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:17:37.908   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:17:37.908   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:17:37.908   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:17:37.908   11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:37.908   11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:37.908  [2024-12-16 11:39:03.931069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:17:37.908  [2024-12-16 11:39:03.931140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:37.908  [2024-12-16 11:39:03.931182] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:17:37.908  [2024-12-16 11:39:03.931193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:37.908  [2024-12-16 11:39:03.931430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:37.908  [2024-12-16 11:39:03.931470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:17:37.908  [2024-12-16 11:39:03.931526] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:17:37.908  [2024-12-16 11:39:03.931563] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:37.908  [2024-12-16 11:39:03.931662] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:17:37.908  [2024-12-16 11:39:03.931676] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:17:37.908  [2024-12-16 11:39:03.931764] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:17:37.908  [2024-12-16 11:39:03.931850] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:17:37.908  [2024-12-16 11:39:03.931865] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:17:37.908  [2024-12-16 11:39:03.931932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:37.908  pt2
00:17:37.908   11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:37.908   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:17:37.908   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:17:37.908   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:17:37.908   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:37.908   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:37.908   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:37.908   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:37.908   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:37.908   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:37.908   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:37.908   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:37.908   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:37.908    11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:37.908    11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:37.908    11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:37.908    11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:37.908    11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:38.167   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:38.167    "name": "raid_bdev1",
00:17:38.167    "uuid": "5dd52772-67cb-45c1-bb78-91e7a9d43fe2",
00:17:38.167    "strip_size_kb": 0,
00:17:38.167    "state": "online",
00:17:38.167    "raid_level": "raid1",
00:17:38.167    "superblock": true,
00:17:38.167    "num_base_bdevs": 2,
00:17:38.167    "num_base_bdevs_discovered": 2,
00:17:38.167    "num_base_bdevs_operational": 2,
00:17:38.167    "base_bdevs_list": [
00:17:38.167      {
00:17:38.167        "name": "pt1",
00:17:38.167        "uuid": "00000000-0000-0000-0000-000000000001",
00:17:38.167        "is_configured": true,
00:17:38.167        "data_offset": 256,
00:17:38.167        "data_size": 7936
00:17:38.167      },
00:17:38.167      {
00:17:38.167        "name": "pt2",
00:17:38.167        "uuid": "00000000-0000-0000-0000-000000000002",
00:17:38.167        "is_configured": true,
00:17:38.167        "data_offset": 256,
00:17:38.167        "data_size": 7936
00:17:38.167      }
00:17:38.167    ]
00:17:38.167  }'
00:17:38.167   11:39:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:38.167   11:39:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:38.428   11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:17:38.428   11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:17:38.428   11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:17:38.428   11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:17:38.428   11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name
00:17:38.428   11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:17:38.428    11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:38.428    11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:17:38.428    11:39:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:38.428    11:39:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:38.428  [2024-12-16 11:39:04.398550] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:38.428    11:39:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:38.428   11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:17:38.428    "name": "raid_bdev1",
00:17:38.428    "aliases": [
00:17:38.428      "5dd52772-67cb-45c1-bb78-91e7a9d43fe2"
00:17:38.428    ],
00:17:38.428    "product_name": "Raid Volume",
00:17:38.428    "block_size": 4096,
00:17:38.428    "num_blocks": 7936,
00:17:38.428    "uuid": "5dd52772-67cb-45c1-bb78-91e7a9d43fe2",
00:17:38.428    "md_size": 32,
00:17:38.428    "md_interleave": false,
00:17:38.428    "dif_type": 0,
00:17:38.428    "assigned_rate_limits": {
00:17:38.428      "rw_ios_per_sec": 0,
00:17:38.428      "rw_mbytes_per_sec": 0,
00:17:38.428      "r_mbytes_per_sec": 0,
00:17:38.428      "w_mbytes_per_sec": 0
00:17:38.428    },
00:17:38.428    "claimed": false,
00:17:38.428    "zoned": false,
00:17:38.428    "supported_io_types": {
00:17:38.428      "read": true,
00:17:38.428      "write": true,
00:17:38.428      "unmap": false,
00:17:38.428      "flush": false,
00:17:38.428      "reset": true,
00:17:38.428      "nvme_admin": false,
00:17:38.428      "nvme_io": false,
00:17:38.428      "nvme_io_md": false,
00:17:38.428      "write_zeroes": true,
00:17:38.428      "zcopy": false,
00:17:38.428      "get_zone_info": false,
00:17:38.428      "zone_management": false,
00:17:38.428      "zone_append": false,
00:17:38.428      "compare": false,
00:17:38.428      "compare_and_write": false,
00:17:38.428      "abort": false,
00:17:38.428      "seek_hole": false,
00:17:38.428      "seek_data": false,
00:17:38.428      "copy": false,
00:17:38.428      "nvme_iov_md": false
00:17:38.428    },
00:17:38.428    "memory_domains": [
00:17:38.428      {
00:17:38.428        "dma_device_id": "system",
00:17:38.428        "dma_device_type": 1
00:17:38.428      },
00:17:38.428      {
00:17:38.428        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:38.428        "dma_device_type": 2
00:17:38.428      },
00:17:38.428      {
00:17:38.428        "dma_device_id": "system",
00:17:38.428        "dma_device_type": 1
00:17:38.428      },
00:17:38.428      {
00:17:38.428        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:38.428        "dma_device_type": 2
00:17:38.428      }
00:17:38.428    ],
00:17:38.428    "driver_specific": {
00:17:38.428      "raid": {
00:17:38.428        "uuid": "5dd52772-67cb-45c1-bb78-91e7a9d43fe2",
00:17:38.428        "strip_size_kb": 0,
00:17:38.428        "state": "online",
00:17:38.428        "raid_level": "raid1",
00:17:38.428        "superblock": true,
00:17:38.428        "num_base_bdevs": 2,
00:17:38.428        "num_base_bdevs_discovered": 2,
00:17:38.428        "num_base_bdevs_operational": 2,
00:17:38.428        "base_bdevs_list": [
00:17:38.428          {
00:17:38.428            "name": "pt1",
00:17:38.428            "uuid": "00000000-0000-0000-0000-000000000001",
00:17:38.428            "is_configured": true,
00:17:38.428            "data_offset": 256,
00:17:38.428            "data_size": 7936
00:17:38.428          },
00:17:38.428          {
00:17:38.428            "name": "pt2",
00:17:38.428            "uuid": "00000000-0000-0000-0000-000000000002",
00:17:38.428            "is_configured": true,
00:17:38.428            "data_offset": 256,
00:17:38.428            "data_size": 7936
00:17:38.428          }
00:17:38.428        ]
00:17:38.428      }
00:17:38.428    }
00:17:38.428  }'
00:17:38.428    11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:17:38.428   11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:17:38.428  pt2'
00:17:38.428    11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:38.688   11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0'
00:17:38.688   11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:38.688    11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:17:38.688    11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:38.688    11:39:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:38.688    11:39:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:38.688    11:39:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:38.688   11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:17:38.688   11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:17:38.688   11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:38.688    11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:17:38.688    11:39:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:38.688    11:39:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:38.688    11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:38.688    11:39:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:38.688   11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:17:38.688   11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:17:38.688    11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:17:38.688    11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:38.688    11:39:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:38.688    11:39:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:38.688  [2024-12-16 11:39:04.646038] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:38.688    11:39:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:38.688   11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 5dd52772-67cb-45c1-bb78-91e7a9d43fe2 '!=' 5dd52772-67cb-45c1-bb78-91e7a9d43fe2 ']'
00:17:38.688   11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:17:38.688   11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in
00:17:38.688   11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0
00:17:38.688   11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:17:38.688   11:39:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:38.688   11:39:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:38.688  [2024-12-16 11:39:04.673761] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:17:38.688   11:39:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:38.688   11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:38.688   11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:38.688   11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:38.688   11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:38.688   11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:38.688   11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:38.688   11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:38.688   11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:38.688   11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:38.688   11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:38.688    11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:38.688    11:39:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:38.688    11:39:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:38.688    11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:38.688    11:39:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:38.688   11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:38.688    "name": "raid_bdev1",
00:17:38.688    "uuid": "5dd52772-67cb-45c1-bb78-91e7a9d43fe2",
00:17:38.688    "strip_size_kb": 0,
00:17:38.688    "state": "online",
00:17:38.688    "raid_level": "raid1",
00:17:38.688    "superblock": true,
00:17:38.688    "num_base_bdevs": 2,
00:17:38.688    "num_base_bdevs_discovered": 1,
00:17:38.688    "num_base_bdevs_operational": 1,
00:17:38.688    "base_bdevs_list": [
00:17:38.688      {
00:17:38.689        "name": null,
00:17:38.689        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:38.689        "is_configured": false,
00:17:38.689        "data_offset": 0,
00:17:38.689        "data_size": 7936
00:17:38.689      },
00:17:38.689      {
00:17:38.689        "name": "pt2",
00:17:38.689        "uuid": "00000000-0000-0000-0000-000000000002",
00:17:38.689        "is_configured": true,
00:17:38.689        "data_offset": 256,
00:17:38.689        "data_size": 7936
00:17:38.689      }
00:17:38.689    ]
00:17:38.689  }'
00:17:38.689   11:39:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:38.689   11:39:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:39.258  [2024-12-16 11:39:05.117008] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:39.258  [2024-12-16 11:39:05.117045] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:39.258  [2024-12-16 11:39:05.117120] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:39.258  [2024-12-16 11:39:05.117172] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:39.258  [2024-12-16 11:39:05.117184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:39.258    11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:39.258    11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:17:39.258    11:39:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:39.258    11:39:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:39.258    11:39:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:39.258  [2024-12-16 11:39:05.188876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:17:39.258  [2024-12-16 11:39:05.188935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:39.258  [2024-12-16 11:39:05.188953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:17:39.258  [2024-12-16 11:39:05.188963] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:39.258  [2024-12-16 11:39:05.191051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:39.258  [2024-12-16 11:39:05.191086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:17:39.258  [2024-12-16 11:39:05.191140] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:17:39.258  [2024-12-16 11:39:05.191171] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:39.258  [2024-12-16 11:39:05.191236] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:17:39.258  [2024-12-16 11:39:05.191244] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:17:39.258  [2024-12-16 11:39:05.191332] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:17:39.258  [2024-12-16 11:39:05.191422] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:17:39.258  [2024-12-16 11:39:05.191452] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00
00:17:39.258  [2024-12-16 11:39:05.191528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:39.258  pt2
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:39.258    11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:39.258    11:39:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:39.258    11:39:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:39.258    11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:39.258    11:39:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:39.258    "name": "raid_bdev1",
00:17:39.258    "uuid": "5dd52772-67cb-45c1-bb78-91e7a9d43fe2",
00:17:39.258    "strip_size_kb": 0,
00:17:39.258    "state": "online",
00:17:39.258    "raid_level": "raid1",
00:17:39.258    "superblock": true,
00:17:39.258    "num_base_bdevs": 2,
00:17:39.258    "num_base_bdevs_discovered": 1,
00:17:39.258    "num_base_bdevs_operational": 1,
00:17:39.258    "base_bdevs_list": [
00:17:39.258      {
00:17:39.258        "name": null,
00:17:39.258        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:39.258        "is_configured": false,
00:17:39.258        "data_offset": 256,
00:17:39.258        "data_size": 7936
00:17:39.258      },
00:17:39.258      {
00:17:39.258        "name": "pt2",
00:17:39.258        "uuid": "00000000-0000-0000-0000-000000000002",
00:17:39.258        "is_configured": true,
00:17:39.258        "data_offset": 256,
00:17:39.258        "data_size": 7936
00:17:39.258      }
00:17:39.258    ]
00:17:39.258  }'
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:39.258   11:39:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:39.523   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:17:39.523   11:39:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:39.523   11:39:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:39.523  [2024-12-16 11:39:05.576247] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:39.523  [2024-12-16 11:39:05.576284] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:39.523  [2024-12-16 11:39:05.576369] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:39.523  [2024-12-16 11:39:05.576420] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:39.523  [2024-12-16 11:39:05.576433] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline
00:17:39.523   11:39:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:39.524    11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:39.524    11:39:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:39.524    11:39:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:39.797    11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:17:39.797    11:39:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:39.797   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:17:39.797   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:17:39.797   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']'
00:17:39.797   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:17:39.797   11:39:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:39.797   11:39:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:39.797  [2024-12-16 11:39:05.640100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:17:39.797  [2024-12-16 11:39:05.640169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:39.797  [2024-12-16 11:39:05.640189] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:17:39.797  [2024-12-16 11:39:05.640203] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:39.797  [2024-12-16 11:39:05.642349] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:39.797  [2024-12-16 11:39:05.642395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:17:39.797  [2024-12-16 11:39:05.642454] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:17:39.797  [2024-12-16 11:39:05.642499] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:17:39.797  [2024-12-16 11:39:05.642628] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:17:39.797  [2024-12-16 11:39:05.642646] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:39.797  [2024-12-16 11:39:05.642663] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring
00:17:39.797  [2024-12-16 11:39:05.642699] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:39.797  [2024-12-16 11:39:05.642763] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400
00:17:39.797  [2024-12-16 11:39:05.642795] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:17:39.797  [2024-12-16 11:39:05.642875] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:17:39.797  [2024-12-16 11:39:05.642959] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400
00:17:39.797  [2024-12-16 11:39:05.642969] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400
00:17:39.797  [2024-12-16 11:39:05.643053] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:39.797  pt1
00:17:39.797   11:39:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:39.797   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']'
00:17:39.797   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:39.797   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:39.797   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:39.797   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:39.797   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:39.797   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:39.797   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:39.797   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:39.797   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:39.797   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:39.797    11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:39.797    11:39:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:39.797    11:39:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:39.797    11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:39.797    11:39:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:39.797   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:39.797    "name": "raid_bdev1",
00:17:39.797    "uuid": "5dd52772-67cb-45c1-bb78-91e7a9d43fe2",
00:17:39.797    "strip_size_kb": 0,
00:17:39.797    "state": "online",
00:17:39.797    "raid_level": "raid1",
00:17:39.797    "superblock": true,
00:17:39.797    "num_base_bdevs": 2,
00:17:39.797    "num_base_bdevs_discovered": 1,
00:17:39.797    "num_base_bdevs_operational": 1,
00:17:39.797    "base_bdevs_list": [
00:17:39.797      {
00:17:39.797        "name": null,
00:17:39.797        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:39.797        "is_configured": false,
00:17:39.797        "data_offset": 256,
00:17:39.797        "data_size": 7936
00:17:39.797      },
00:17:39.797      {
00:17:39.797        "name": "pt2",
00:17:39.797        "uuid": "00000000-0000-0000-0000-000000000002",
00:17:39.797        "is_configured": true,
00:17:39.797        "data_offset": 256,
00:17:39.797        "data_size": 7936
00:17:39.797      }
00:17:39.797    ]
00:17:39.797  }'
00:17:39.797   11:39:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:39.797   11:39:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:40.072    11:39:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online
00:17:40.072    11:39:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:17:40.072    11:39:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:40.072    11:39:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:40.072    11:39:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:40.072   11:39:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:17:40.072    11:39:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:40.072    11:39:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
00:17:40.072    11:39:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:40.072    11:39:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:40.072  [2024-12-16 11:39:06.115633] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:40.072    11:39:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:40.333   11:39:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 5dd52772-67cb-45c1-bb78-91e7a9d43fe2 '!=' 5dd52772-67cb-45c1-bb78-91e7a9d43fe2 ']'
00:17:40.333   11:39:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 98159
00:17:40.333   11:39:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 98159 ']'
00:17:40.333   11:39:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 98159
00:17:40.333    11:39:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname
00:17:40.333   11:39:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:17:40.333    11:39:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98159
00:17:40.333   11:39:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:17:40.333   11:39:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:17:40.333   11:39:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98159'
00:17:40.333  killing process with pid 98159
00:17:40.333   11:39:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 98159
00:17:40.333  [2024-12-16 11:39:06.199989] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:17:40.333  [2024-12-16 11:39:06.200080] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:40.333  [2024-12-16 11:39:06.200130] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:40.333  [2024-12-16 11:39:06.200144] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline
00:17:40.333   11:39:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 98159
00:17:40.333  [2024-12-16 11:39:06.224525] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:17:40.594   11:39:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0
00:17:40.594  
00:17:40.594  real	0m4.963s
00:17:40.594  user	0m8.072s
00:17:40.594  sys	0m1.114s
00:17:40.594   11:39:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable
00:17:40.594   11:39:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:40.594  ************************************
00:17:40.594  END TEST raid_superblock_test_md_separate
00:17:40.594  ************************************
00:17:40.594   11:39:06 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']'
00:17:40.594   11:39:06 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true
00:17:40.594   11:39:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:17:40.594   11:39:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:17:40.594   11:39:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:17:40.594  ************************************
00:17:40.594  START TEST raid_rebuild_test_sb_md_separate
00:17:40.594  ************************************
00:17:40.594   11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true
00:17:40.594   11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1
00:17:40.594   11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2
00:17:40.594   11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true
00:17:40.594   11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:17:40.594   11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true
00:17:40.594    11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:17:40.594    11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:17:40.594    11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:17:40.594    11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:17:40.594    11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:17:40.594    11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:17:40.594    11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:17:40.594    11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:17:40.594   11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:17:40.594   11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:17:40.594   11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:17:40.594   11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size
00:17:40.594   11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg
00:17:40.594   11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:17:40.594   11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset
00:17:40.594   11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']'
00:17:40.594   11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0
00:17:40.594   11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
00:17:40.594   11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
00:17:40.594   11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=98471
00:17:40.594   11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:17:40.594   11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 98471
00:17:40.594   11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 98471 ']'
00:17:40.594   11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:40.594   11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100
00:17:40.594  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:40.594   11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:40.594   11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable
00:17:40.594   11:39:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:40.594  I/O size of 3145728 is greater than zero copy threshold (65536).
00:17:40.594  Zero copy mechanism will not be used.
00:17:40.594  [2024-12-16 11:39:06.637906] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:17:40.594  [2024-12-16 11:39:06.638025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98471 ]
00:17:40.854  [2024-12-16 11:39:06.799321] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:40.854  [2024-12-16 11:39:06.846003] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:17:40.854  [2024-12-16 11:39:06.888182] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:40.854  [2024-12-16 11:39:06.888219] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:41.423   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:17:41.423   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0
00:17:41.423   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:17:41.423   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc
00:17:41.423   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:41.423   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:41.683  BaseBdev1_malloc
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:41.683  [2024-12-16 11:39:07.510610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:17:41.683  [2024-12-16 11:39:07.510674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:41.683  [2024-12-16 11:39:07.510699] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:17:41.683  [2024-12-16 11:39:07.510714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:41.683  [2024-12-16 11:39:07.512688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:41.683  [2024-12-16 11:39:07.512723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:17:41.683  BaseBdev1
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:41.683  BaseBdev2_malloc
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:41.683  [2024-12-16 11:39:07.550711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:17:41.683  [2024-12-16 11:39:07.550775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:41.683  [2024-12-16 11:39:07.550799] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:17:41.683  [2024-12-16 11:39:07.550810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:41.683  [2024-12-16 11:39:07.552773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:41.683  [2024-12-16 11:39:07.552808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:17:41.683  BaseBdev2
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:41.683  spare_malloc
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:41.683  spare_delay
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:41.683  [2024-12-16 11:39:07.591939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:17:41.683  [2024-12-16 11:39:07.592010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:41.683  [2024-12-16 11:39:07.592039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:17:41.683  [2024-12-16 11:39:07.592052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:41.683  [2024-12-16 11:39:07.594248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:41.683  [2024-12-16 11:39:07.594290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:17:41.683  spare
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:41.683  [2024-12-16 11:39:07.603943] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:41.683  [2024-12-16 11:39:07.606021] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:17:41.683  [2024-12-16 11:39:07.606198] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:17:41.683  [2024-12-16 11:39:07.606221] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:17:41.683  [2024-12-16 11:39:07.606319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:17:41.683  [2024-12-16 11:39:07.606427] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:17:41.683  [2024-12-16 11:39:07.606442] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:17:41.683  [2024-12-16 11:39:07.606561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:41.683    11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:41.683    11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:41.683    11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:41.683    11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:41.683    11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:41.683    "name": "raid_bdev1",
00:17:41.683    "uuid": "534cce03-d069-42c9-b2b1-0f48714b7aa3",
00:17:41.683    "strip_size_kb": 0,
00:17:41.683    "state": "online",
00:17:41.683    "raid_level": "raid1",
00:17:41.683    "superblock": true,
00:17:41.683    "num_base_bdevs": 2,
00:17:41.683    "num_base_bdevs_discovered": 2,
00:17:41.683    "num_base_bdevs_operational": 2,
00:17:41.683    "base_bdevs_list": [
00:17:41.683      {
00:17:41.683        "name": "BaseBdev1",
00:17:41.683        "uuid": "0fb2eecf-954e-5726-9b8f-4e959068c62e",
00:17:41.683        "is_configured": true,
00:17:41.683        "data_offset": 256,
00:17:41.683        "data_size": 7936
00:17:41.683      },
00:17:41.683      {
00:17:41.683        "name": "BaseBdev2",
00:17:41.683        "uuid": "4336b89f-b418-58b9-9b6f-1dddf32e94cd",
00:17:41.683        "is_configured": true,
00:17:41.683        "data_offset": 256,
00:17:41.683        "data_size": 7936
00:17:41.683      }
00:17:41.683    ]
00:17:41.683  }'
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:41.683   11:39:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:42.253    11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:42.253    11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:42.253    11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:42.253    11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:17:42.253  [2024-12-16 11:39:08.067493] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:42.253    11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:42.253   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936
00:17:42.253    11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:42.253    11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:42.253    11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:42.253    11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:17:42.253    11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:42.253   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256
00:17:42.253   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:17:42.253   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:17:42.253   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:17:42.253   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:17:42.253   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:17:42.253   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:17:42.253   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list
00:17:42.253   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:17:42.253   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list
00:17:42.253   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i
00:17:42.253   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:17:42.253   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:17:42.253   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:17:42.514  [2024-12-16 11:39:08.350751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:17:42.514  /dev/nbd0
00:17:42.514    11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:17:42.514   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:17:42.514   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:17:42.514   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i
00:17:42.514   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:17:42.514   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:17:42.514   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:17:42.514   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break
00:17:42.514   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:17:42.514   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:17:42.514   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:17:42.514  1+0 records in
00:17:42.514  1+0 records out
00:17:42.514  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334788 s, 12.2 MB/s
00:17:42.514    11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:17:42.514   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096
00:17:42.514   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:17:42.514   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:17:42.514   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0
00:17:42.514   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:17:42.514   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:17:42.514   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:17:42.514   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:17:42.514   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct
00:17:43.083  7936+0 records in
00:17:43.083  7936+0 records out
00:17:43.083  32505856 bytes (33 MB, 31 MiB) copied, 0.574691 s, 56.6 MB/s
00:17:43.083   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:17:43.083   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:17:43.083   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:17:43.083   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list
00:17:43.083   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i
00:17:43.083   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:17:43.083   11:39:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:17:43.343    11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:17:43.343  [2024-12-16 11:39:09.209626] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:43.343   11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:17:43.343   11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:17:43.343   11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:17:43.343   11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:17:43.343   11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:17:43.343   11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break
00:17:43.343   11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0
00:17:43.343   11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:17:43.343   11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:43.343   11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:43.343  [2024-12-16 11:39:09.221687] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:17:43.343   11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:43.343   11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:43.343   11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:43.343   11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:43.343   11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:43.343   11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:43.343   11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:43.343   11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:43.343   11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:43.343   11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:43.343   11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:43.343    11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:43.343    11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:43.343    11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:43.343    11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:43.343    11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:43.343   11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:43.343    "name": "raid_bdev1",
00:17:43.343    "uuid": "534cce03-d069-42c9-b2b1-0f48714b7aa3",
00:17:43.343    "strip_size_kb": 0,
00:17:43.343    "state": "online",
00:17:43.343    "raid_level": "raid1",
00:17:43.343    "superblock": true,
00:17:43.343    "num_base_bdevs": 2,
00:17:43.343    "num_base_bdevs_discovered": 1,
00:17:43.343    "num_base_bdevs_operational": 1,
00:17:43.343    "base_bdevs_list": [
00:17:43.343      {
00:17:43.343        "name": null,
00:17:43.343        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:43.343        "is_configured": false,
00:17:43.343        "data_offset": 0,
00:17:43.343        "data_size": 7936
00:17:43.343      },
00:17:43.343      {
00:17:43.343        "name": "BaseBdev2",
00:17:43.343        "uuid": "4336b89f-b418-58b9-9b6f-1dddf32e94cd",
00:17:43.343        "is_configured": true,
00:17:43.343        "data_offset": 256,
00:17:43.343        "data_size": 7936
00:17:43.343      }
00:17:43.343    ]
00:17:43.343  }'
00:17:43.343   11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:43.343   11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:43.911   11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:17:43.911   11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:43.911   11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:43.911  [2024-12-16 11:39:09.680919] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:17:43.911  [2024-12-16 11:39:09.682741] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0
00:17:43.911  [2024-12-16 11:39:09.684640] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:17:43.911   11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:43.911   11:39:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1
00:17:44.851   11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:44.851   11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:44.851   11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:44.851   11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:44.851   11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:44.851    11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:44.851    11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:44.851    11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:44.851    11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:44.851    11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:44.851   11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:44.851    "name": "raid_bdev1",
00:17:44.851    "uuid": "534cce03-d069-42c9-b2b1-0f48714b7aa3",
00:17:44.851    "strip_size_kb": 0,
00:17:44.851    "state": "online",
00:17:44.851    "raid_level": "raid1",
00:17:44.851    "superblock": true,
00:17:44.851    "num_base_bdevs": 2,
00:17:44.851    "num_base_bdevs_discovered": 2,
00:17:44.851    "num_base_bdevs_operational": 2,
00:17:44.851    "process": {
00:17:44.851      "type": "rebuild",
00:17:44.851      "target": "spare",
00:17:44.851      "progress": {
00:17:44.851        "blocks": 2560,
00:17:44.851        "percent": 32
00:17:44.851      }
00:17:44.851    },
00:17:44.851    "base_bdevs_list": [
00:17:44.851      {
00:17:44.851        "name": "spare",
00:17:44.851        "uuid": "eaa7a401-db1a-5877-9273-f9e6cc343eb2",
00:17:44.851        "is_configured": true,
00:17:44.851        "data_offset": 256,
00:17:44.851        "data_size": 7936
00:17:44.851      },
00:17:44.851      {
00:17:44.851        "name": "BaseBdev2",
00:17:44.851        "uuid": "4336b89f-b418-58b9-9b6f-1dddf32e94cd",
00:17:44.851        "is_configured": true,
00:17:44.851        "data_offset": 256,
00:17:44.851        "data_size": 7936
00:17:44.851      }
00:17:44.851    ]
00:17:44.851  }'
00:17:44.851    11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:44.851   11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:44.851    11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:44.851   11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:44.851   11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:17:44.851   11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:44.851   11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:44.851  [2024-12-16 11:39:10.823823] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:44.851  [2024-12-16 11:39:10.890160] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:17:44.851  [2024-12-16 11:39:10.890261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:44.851  [2024-12-16 11:39:10.890296] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:44.851  [2024-12-16 11:39:10.890305] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:17:44.851   11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:44.851   11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:44.851   11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:44.851   11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:44.851   11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:44.851   11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:44.851   11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:44.851   11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:44.851   11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:44.851   11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:44.851   11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:44.851    11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:44.851    11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:44.851    11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:44.851    11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:45.111    11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:45.111   11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:45.111    "name": "raid_bdev1",
00:17:45.111    "uuid": "534cce03-d069-42c9-b2b1-0f48714b7aa3",
00:17:45.111    "strip_size_kb": 0,
00:17:45.111    "state": "online",
00:17:45.111    "raid_level": "raid1",
00:17:45.111    "superblock": true,
00:17:45.111    "num_base_bdevs": 2,
00:17:45.111    "num_base_bdevs_discovered": 1,
00:17:45.111    "num_base_bdevs_operational": 1,
00:17:45.111    "base_bdevs_list": [
00:17:45.111      {
00:17:45.111        "name": null,
00:17:45.111        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:45.111        "is_configured": false,
00:17:45.111        "data_offset": 0,
00:17:45.111        "data_size": 7936
00:17:45.111      },
00:17:45.111      {
00:17:45.111        "name": "BaseBdev2",
00:17:45.111        "uuid": "4336b89f-b418-58b9-9b6f-1dddf32e94cd",
00:17:45.111        "is_configured": true,
00:17:45.111        "data_offset": 256,
00:17:45.111        "data_size": 7936
00:17:45.111      }
00:17:45.111    ]
00:17:45.111  }'
00:17:45.111   11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:45.111   11:39:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:45.371   11:39:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:45.371   11:39:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:45.371   11:39:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:45.371   11:39:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:45.371   11:39:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:45.371    11:39:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:45.371    11:39:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:45.371    11:39:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:45.371    11:39:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:45.371    11:39:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:45.371   11:39:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:45.371    "name": "raid_bdev1",
00:17:45.371    "uuid": "534cce03-d069-42c9-b2b1-0f48714b7aa3",
00:17:45.371    "strip_size_kb": 0,
00:17:45.371    "state": "online",
00:17:45.371    "raid_level": "raid1",
00:17:45.371    "superblock": true,
00:17:45.371    "num_base_bdevs": 2,
00:17:45.371    "num_base_bdevs_discovered": 1,
00:17:45.371    "num_base_bdevs_operational": 1,
00:17:45.371    "base_bdevs_list": [
00:17:45.371      {
00:17:45.371        "name": null,
00:17:45.371        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:45.371        "is_configured": false,
00:17:45.371        "data_offset": 0,
00:17:45.371        "data_size": 7936
00:17:45.371      },
00:17:45.371      {
00:17:45.371        "name": "BaseBdev2",
00:17:45.371        "uuid": "4336b89f-b418-58b9-9b6f-1dddf32e94cd",
00:17:45.371        "is_configured": true,
00:17:45.371        "data_offset": 256,
00:17:45.371        "data_size": 7936
00:17:45.371      }
00:17:45.371    ]
00:17:45.371  }'
00:17:45.371    11:39:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:45.632   11:39:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:45.632    11:39:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:45.632   11:39:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:45.632   11:39:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:17:45.632   11:39:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:45.632   11:39:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:45.632  [2024-12-16 11:39:11.540497] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:17:45.632  [2024-12-16 11:39:11.542363] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190
00:17:45.632  [2024-12-16 11:39:11.544288] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:17:45.632   11:39:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:45.632   11:39:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1
00:17:46.571   11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:46.571   11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:46.571   11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:46.571   11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:46.571   11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:46.571    11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:46.571    11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:46.571    11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:46.571    11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:46.571    11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:46.571   11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:46.571    "name": "raid_bdev1",
00:17:46.571    "uuid": "534cce03-d069-42c9-b2b1-0f48714b7aa3",
00:17:46.571    "strip_size_kb": 0,
00:17:46.571    "state": "online",
00:17:46.571    "raid_level": "raid1",
00:17:46.571    "superblock": true,
00:17:46.571    "num_base_bdevs": 2,
00:17:46.571    "num_base_bdevs_discovered": 2,
00:17:46.571    "num_base_bdevs_operational": 2,
00:17:46.571    "process": {
00:17:46.571      "type": "rebuild",
00:17:46.571      "target": "spare",
00:17:46.571      "progress": {
00:17:46.571        "blocks": 2560,
00:17:46.571        "percent": 32
00:17:46.571      }
00:17:46.571    },
00:17:46.571    "base_bdevs_list": [
00:17:46.571      {
00:17:46.571        "name": "spare",
00:17:46.571        "uuid": "eaa7a401-db1a-5877-9273-f9e6cc343eb2",
00:17:46.571        "is_configured": true,
00:17:46.571        "data_offset": 256,
00:17:46.571        "data_size": 7936
00:17:46.571      },
00:17:46.571      {
00:17:46.571        "name": "BaseBdev2",
00:17:46.571        "uuid": "4336b89f-b418-58b9-9b6f-1dddf32e94cd",
00:17:46.571        "is_configured": true,
00:17:46.571        "data_offset": 256,
00:17:46.571        "data_size": 7936
00:17:46.571      }
00:17:46.571    ]
00:17:46.571  }'
00:17:46.571    11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:46.831   11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:46.831    11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:46.831   11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:46.831   11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:17:46.831   11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:17:46.831  /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:17:46.831   11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:17:46.831   11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:17:46.831   11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:17:46.831   11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=605
00:17:46.831   11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:17:46.831   11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:46.831   11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:46.831   11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:46.831   11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:46.832   11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:46.832    11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:46.832    11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:46.832    11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:46.832    11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:46.832    11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:46.832   11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:46.832    "name": "raid_bdev1",
00:17:46.832    "uuid": "534cce03-d069-42c9-b2b1-0f48714b7aa3",
00:17:46.832    "strip_size_kb": 0,
00:17:46.832    "state": "online",
00:17:46.832    "raid_level": "raid1",
00:17:46.832    "superblock": true,
00:17:46.832    "num_base_bdevs": 2,
00:17:46.832    "num_base_bdevs_discovered": 2,
00:17:46.832    "num_base_bdevs_operational": 2,
00:17:46.832    "process": {
00:17:46.832      "type": "rebuild",
00:17:46.832      "target": "spare",
00:17:46.832      "progress": {
00:17:46.832        "blocks": 2816,
00:17:46.832        "percent": 35
00:17:46.832      }
00:17:46.832    },
00:17:46.832    "base_bdevs_list": [
00:17:46.832      {
00:17:46.832        "name": "spare",
00:17:46.832        "uuid": "eaa7a401-db1a-5877-9273-f9e6cc343eb2",
00:17:46.832        "is_configured": true,
00:17:46.832        "data_offset": 256,
00:17:46.832        "data_size": 7936
00:17:46.832      },
00:17:46.832      {
00:17:46.832        "name": "BaseBdev2",
00:17:46.832        "uuid": "4336b89f-b418-58b9-9b6f-1dddf32e94cd",
00:17:46.832        "is_configured": true,
00:17:46.832        "data_offset": 256,
00:17:46.832        "data_size": 7936
00:17:46.832      }
00:17:46.832    ]
00:17:46.832  }'
00:17:46.832    11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:46.832   11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:46.832    11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:46.832   11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:46.832   11:39:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1
00:17:47.771   11:39:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:17:47.771   11:39:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:47.771   11:39:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:47.771   11:39:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:47.771   11:39:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:47.771   11:39:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:47.771    11:39:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:48.031    11:39:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:48.031    11:39:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:48.031    11:39:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:48.031    11:39:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:48.031   11:39:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:48.031    "name": "raid_bdev1",
00:17:48.031    "uuid": "534cce03-d069-42c9-b2b1-0f48714b7aa3",
00:17:48.031    "strip_size_kb": 0,
00:17:48.031    "state": "online",
00:17:48.031    "raid_level": "raid1",
00:17:48.031    "superblock": true,
00:17:48.031    "num_base_bdevs": 2,
00:17:48.031    "num_base_bdevs_discovered": 2,
00:17:48.031    "num_base_bdevs_operational": 2,
00:17:48.031    "process": {
00:17:48.031      "type": "rebuild",
00:17:48.031      "target": "spare",
00:17:48.031      "progress": {
00:17:48.031        "blocks": 5632,
00:17:48.031        "percent": 70
00:17:48.031      }
00:17:48.031    },
00:17:48.031    "base_bdevs_list": [
00:17:48.031      {
00:17:48.031        "name": "spare",
00:17:48.031        "uuid": "eaa7a401-db1a-5877-9273-f9e6cc343eb2",
00:17:48.031        "is_configured": true,
00:17:48.031        "data_offset": 256,
00:17:48.031        "data_size": 7936
00:17:48.031      },
00:17:48.031      {
00:17:48.031        "name": "BaseBdev2",
00:17:48.031        "uuid": "4336b89f-b418-58b9-9b6f-1dddf32e94cd",
00:17:48.031        "is_configured": true,
00:17:48.031        "data_offset": 256,
00:17:48.031        "data_size": 7936
00:17:48.031      }
00:17:48.031    ]
00:17:48.031  }'
00:17:48.031    11:39:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:48.031   11:39:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:48.031    11:39:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:48.031   11:39:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:48.031   11:39:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1
00:17:48.623  [2024-12-16 11:39:14.657448] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:17:48.623  [2024-12-16 11:39:14.657627] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:17:48.623  [2024-12-16 11:39:14.657798] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:49.194   11:39:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:17:49.194   11:39:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:49.194   11:39:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:49.194   11:39:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:49.194   11:39:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:49.194   11:39:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:49.194    11:39:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:49.194    11:39:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:49.194    11:39:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:49.194    11:39:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:49.194    11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:49.194   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:49.194    "name": "raid_bdev1",
00:17:49.194    "uuid": "534cce03-d069-42c9-b2b1-0f48714b7aa3",
00:17:49.194    "strip_size_kb": 0,
00:17:49.194    "state": "online",
00:17:49.194    "raid_level": "raid1",
00:17:49.194    "superblock": true,
00:17:49.194    "num_base_bdevs": 2,
00:17:49.194    "num_base_bdevs_discovered": 2,
00:17:49.194    "num_base_bdevs_operational": 2,
00:17:49.194    "base_bdevs_list": [
00:17:49.194      {
00:17:49.194        "name": "spare",
00:17:49.194        "uuid": "eaa7a401-db1a-5877-9273-f9e6cc343eb2",
00:17:49.194        "is_configured": true,
00:17:49.194        "data_offset": 256,
00:17:49.194        "data_size": 7936
00:17:49.194      },
00:17:49.194      {
00:17:49.194        "name": "BaseBdev2",
00:17:49.194        "uuid": "4336b89f-b418-58b9-9b6f-1dddf32e94cd",
00:17:49.194        "is_configured": true,
00:17:49.194        "data_offset": 256,
00:17:49.194        "data_size": 7936
00:17:49.194      }
00:17:49.194    ]
00:17:49.194  }'
00:17:49.194    11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:49.194   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:17:49.194    11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:49.194   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:17:49.194   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break
00:17:49.194   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:49.194   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:49.194   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:49.194   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:49.194   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:49.194    11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:49.194    11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:49.194    11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:49.194    11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:49.194    11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:49.194   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:49.194    "name": "raid_bdev1",
00:17:49.194    "uuid": "534cce03-d069-42c9-b2b1-0f48714b7aa3",
00:17:49.194    "strip_size_kb": 0,
00:17:49.194    "state": "online",
00:17:49.194    "raid_level": "raid1",
00:17:49.194    "superblock": true,
00:17:49.194    "num_base_bdevs": 2,
00:17:49.194    "num_base_bdevs_discovered": 2,
00:17:49.194    "num_base_bdevs_operational": 2,
00:17:49.194    "base_bdevs_list": [
00:17:49.194      {
00:17:49.194        "name": "spare",
00:17:49.194        "uuid": "eaa7a401-db1a-5877-9273-f9e6cc343eb2",
00:17:49.194        "is_configured": true,
00:17:49.194        "data_offset": 256,
00:17:49.194        "data_size": 7936
00:17:49.194      },
00:17:49.194      {
00:17:49.194        "name": "BaseBdev2",
00:17:49.195        "uuid": "4336b89f-b418-58b9-9b6f-1dddf32e94cd",
00:17:49.195        "is_configured": true,
00:17:49.195        "data_offset": 256,
00:17:49.195        "data_size": 7936
00:17:49.195      }
00:17:49.195    ]
00:17:49.195  }'
00:17:49.195    11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:49.195   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:49.195    11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:49.454   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:49.454   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:17:49.455   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:49.455   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:49.455   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:49.455   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:49.455   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:49.455   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:49.455   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:49.455   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:49.455   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:49.455    11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:49.455    11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:49.455    11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:49.455    11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:49.455    11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:49.455   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:49.455    "name": "raid_bdev1",
00:17:49.455    "uuid": "534cce03-d069-42c9-b2b1-0f48714b7aa3",
00:17:49.455    "strip_size_kb": 0,
00:17:49.455    "state": "online",
00:17:49.455    "raid_level": "raid1",
00:17:49.455    "superblock": true,
00:17:49.455    "num_base_bdevs": 2,
00:17:49.455    "num_base_bdevs_discovered": 2,
00:17:49.455    "num_base_bdevs_operational": 2,
00:17:49.455    "base_bdevs_list": [
00:17:49.455      {
00:17:49.455        "name": "spare",
00:17:49.455        "uuid": "eaa7a401-db1a-5877-9273-f9e6cc343eb2",
00:17:49.455        "is_configured": true,
00:17:49.455        "data_offset": 256,
00:17:49.455        "data_size": 7936
00:17:49.455      },
00:17:49.455      {
00:17:49.455        "name": "BaseBdev2",
00:17:49.455        "uuid": "4336b89f-b418-58b9-9b6f-1dddf32e94cd",
00:17:49.455        "is_configured": true,
00:17:49.455        "data_offset": 256,
00:17:49.455        "data_size": 7936
00:17:49.455      }
00:17:49.455    ]
00:17:49.455  }'
00:17:49.455   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:49.455   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:49.715   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:17:49.715   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:49.715   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:49.715  [2024-12-16 11:39:15.712625] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:49.715  [2024-12-16 11:39:15.712720] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:49.715  [2024-12-16 11:39:15.712849] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:49.715  [2024-12-16 11:39:15.712945] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:49.715  [2024-12-16 11:39:15.712998] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:17:49.715   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:49.715    11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:49.715    11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length
00:17:49.715    11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:49.715    11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:49.715    11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:49.715   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:17:49.715   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:17:49.715   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:17:49.715   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:17:49.715   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:17:49.715   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:17:49.715   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list
00:17:49.715   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:17:49.715   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list
00:17:49.715   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i
00:17:49.715   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:17:49.715   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:17:49.715   11:39:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:17:49.975  /dev/nbd0
00:17:49.975    11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:17:49.975   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:17:49.975   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:17:49.975   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i
00:17:49.975   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:17:49.975   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:17:49.975   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:17:49.975   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break
00:17:49.975   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:17:49.975   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:17:49.975   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:17:49.975  1+0 records in
00:17:49.975  1+0 records out
00:17:49.975  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349042 s, 11.7 MB/s
00:17:49.975    11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:17:50.235   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096
00:17:50.235   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:17:50.235   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:17:50.235   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0
00:17:50.235   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:17:50.235   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:17:50.235   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1
00:17:50.235  /dev/nbd1
00:17:50.235    11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:17:50.235   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:17:50.235   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:17:50.235   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i
00:17:50.235   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:17:50.235   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:17:50.235   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:17:50.235   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break
00:17:50.235   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:17:50.235   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:17:50.235   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:17:50.235  1+0 records in
00:17:50.235  1+0 records out
00:17:50.235  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039782 s, 10.3 MB/s
00:17:50.235    11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:17:50.235   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096
00:17:50.235   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:17:50.235   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:17:50.235   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0
00:17:50.235   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:17:50.235   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:17:50.235   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:17:50.495   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:17:50.495   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:17:50.495   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:17:50.495   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list
00:17:50.495   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i
00:17:50.495   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:17:50.495   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:17:50.495    11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:17:50.754   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:17:50.754   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:17:50.754   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:17:50.754   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:17:50.754   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:17:50.754   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break
00:17:50.754   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0
00:17:50.754   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:17:50.754   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:17:50.754    11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:17:50.754   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:17:50.754   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:17:50.754   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:17:50.754   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:17:50.754   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:17:50.754   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break
00:17:50.754   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0
00:17:50.754   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:17:50.754   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:17:50.754   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:50.754   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:50.754   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:50.754   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:17:50.754   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:50.754   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:50.754  [2024-12-16 11:39:16.783741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:17:50.754  [2024-12-16 11:39:16.783800] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:50.754  [2024-12-16 11:39:16.783833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:17:50.754  [2024-12-16 11:39:16.783847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:50.754  [2024-12-16 11:39:16.785805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:50.754  [2024-12-16 11:39:16.785842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:17:50.754  [2024-12-16 11:39:16.785897] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:17:50.754  [2024-12-16 11:39:16.785934] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:17:50.754  [2024-12-16 11:39:16.786039] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:17:50.754  spare
00:17:50.754   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:50.754   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:17:50.754   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:50.754   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:51.014  [2024-12-16 11:39:16.885949] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600
00:17:51.014  [2024-12-16 11:39:16.885981] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:17:51.014  [2024-12-16 11:39:16.886109] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0
00:17:51.014  [2024-12-16 11:39:16.886254] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600
00:17:51.014  [2024-12-16 11:39:16.886279] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600
00:17:51.014  [2024-12-16 11:39:16.886396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:51.014   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:51.014   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:17:51.014   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:51.014   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:51.014   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:51.014   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:51.014   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:51.014   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:51.014   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:51.014   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:51.014   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:51.014    11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:51.014    11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:51.014    11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:51.014    11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:51.014    11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:51.014   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:51.014    "name": "raid_bdev1",
00:17:51.014    "uuid": "534cce03-d069-42c9-b2b1-0f48714b7aa3",
00:17:51.014    "strip_size_kb": 0,
00:17:51.014    "state": "online",
00:17:51.014    "raid_level": "raid1",
00:17:51.014    "superblock": true,
00:17:51.014    "num_base_bdevs": 2,
00:17:51.014    "num_base_bdevs_discovered": 2,
00:17:51.014    "num_base_bdevs_operational": 2,
00:17:51.014    "base_bdevs_list": [
00:17:51.014      {
00:17:51.014        "name": "spare",
00:17:51.014        "uuid": "eaa7a401-db1a-5877-9273-f9e6cc343eb2",
00:17:51.014        "is_configured": true,
00:17:51.014        "data_offset": 256,
00:17:51.014        "data_size": 7936
00:17:51.014      },
00:17:51.014      {
00:17:51.014        "name": "BaseBdev2",
00:17:51.014        "uuid": "4336b89f-b418-58b9-9b6f-1dddf32e94cd",
00:17:51.014        "is_configured": true,
00:17:51.014        "data_offset": 256,
00:17:51.014        "data_size": 7936
00:17:51.014      }
00:17:51.014    ]
00:17:51.014  }'
00:17:51.014   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:51.014   11:39:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:51.274   11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:51.274   11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:51.274   11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:51.274   11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:51.274   11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:51.274    11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:51.274    11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:51.274    11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:51.274    11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:51.274    11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:51.534   11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:51.534    "name": "raid_bdev1",
00:17:51.534    "uuid": "534cce03-d069-42c9-b2b1-0f48714b7aa3",
00:17:51.534    "strip_size_kb": 0,
00:17:51.534    "state": "online",
00:17:51.534    "raid_level": "raid1",
00:17:51.534    "superblock": true,
00:17:51.534    "num_base_bdevs": 2,
00:17:51.534    "num_base_bdevs_discovered": 2,
00:17:51.534    "num_base_bdevs_operational": 2,
00:17:51.534    "base_bdevs_list": [
00:17:51.534      {
00:17:51.534        "name": "spare",
00:17:51.534        "uuid": "eaa7a401-db1a-5877-9273-f9e6cc343eb2",
00:17:51.534        "is_configured": true,
00:17:51.534        "data_offset": 256,
00:17:51.534        "data_size": 7936
00:17:51.534      },
00:17:51.534      {
00:17:51.534        "name": "BaseBdev2",
00:17:51.534        "uuid": "4336b89f-b418-58b9-9b6f-1dddf32e94cd",
00:17:51.534        "is_configured": true,
00:17:51.534        "data_offset": 256,
00:17:51.534        "data_size": 7936
00:17:51.534      }
00:17:51.534    ]
00:17:51.534  }'
00:17:51.534    11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:51.534   11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:51.534    11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:51.534   11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:51.534    11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:51.534    11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:17:51.534    11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:51.534    11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:51.534    11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:51.534   11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:17:51.534   11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:17:51.534   11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:51.534   11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:51.534  [2024-12-16 11:39:17.494600] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:51.534   11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:51.534   11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:51.534   11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:51.534   11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:51.534   11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:51.534   11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:51.534   11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:51.534   11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:51.534   11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:51.534   11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:51.534   11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:51.534    11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:51.534    11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:51.534    11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:51.534    11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:51.534    11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:51.534   11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:51.534    "name": "raid_bdev1",
00:17:51.534    "uuid": "534cce03-d069-42c9-b2b1-0f48714b7aa3",
00:17:51.534    "strip_size_kb": 0,
00:17:51.534    "state": "online",
00:17:51.534    "raid_level": "raid1",
00:17:51.534    "superblock": true,
00:17:51.535    "num_base_bdevs": 2,
00:17:51.535    "num_base_bdevs_discovered": 1,
00:17:51.535    "num_base_bdevs_operational": 1,
00:17:51.535    "base_bdevs_list": [
00:17:51.535      {
00:17:51.535        "name": null,
00:17:51.535        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:51.535        "is_configured": false,
00:17:51.535        "data_offset": 0,
00:17:51.535        "data_size": 7936
00:17:51.535      },
00:17:51.535      {
00:17:51.535        "name": "BaseBdev2",
00:17:51.535        "uuid": "4336b89f-b418-58b9-9b6f-1dddf32e94cd",
00:17:51.535        "is_configured": true,
00:17:51.535        "data_offset": 256,
00:17:51.535        "data_size": 7936
00:17:51.535      }
00:17:51.535    ]
00:17:51.535  }'
00:17:51.535   11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:51.535   11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:52.105   11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:17:52.105   11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:52.105   11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:52.105  [2024-12-16 11:39:17.889928] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:17:52.105  [2024-12-16 11:39:17.890114] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:17:52.105  [2024-12-16 11:39:17.890138] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:17:52.105  [2024-12-16 11:39:17.890188] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:17:52.105  [2024-12-16 11:39:17.891854] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80
00:17:52.105  [2024-12-16 11:39:17.893870] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:17:52.105   11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:52.105   11:39:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1
00:17:53.044   11:39:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:53.044   11:39:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:53.044   11:39:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:53.044   11:39:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:53.044   11:39:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:53.044    11:39:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:53.044    11:39:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:53.044    11:39:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:53.044    11:39:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:53.044    11:39:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:53.044   11:39:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:53.044    "name": "raid_bdev1",
00:17:53.044    "uuid": "534cce03-d069-42c9-b2b1-0f48714b7aa3",
00:17:53.044    "strip_size_kb": 0,
00:17:53.044    "state": "online",
00:17:53.044    "raid_level": "raid1",
00:17:53.044    "superblock": true,
00:17:53.044    "num_base_bdevs": 2,
00:17:53.044    "num_base_bdevs_discovered": 2,
00:17:53.044    "num_base_bdevs_operational": 2,
00:17:53.044    "process": {
00:17:53.044      "type": "rebuild",
00:17:53.044      "target": "spare",
00:17:53.044      "progress": {
00:17:53.044        "blocks": 2560,
00:17:53.044        "percent": 32
00:17:53.044      }
00:17:53.044    },
00:17:53.044    "base_bdevs_list": [
00:17:53.044      {
00:17:53.044        "name": "spare",
00:17:53.044        "uuid": "eaa7a401-db1a-5877-9273-f9e6cc343eb2",
00:17:53.044        "is_configured": true,
00:17:53.044        "data_offset": 256,
00:17:53.044        "data_size": 7936
00:17:53.044      },
00:17:53.044      {
00:17:53.044        "name": "BaseBdev2",
00:17:53.044        "uuid": "4336b89f-b418-58b9-9b6f-1dddf32e94cd",
00:17:53.044        "is_configured": true,
00:17:53.044        "data_offset": 256,
00:17:53.044        "data_size": 7936
00:17:53.044      }
00:17:53.044    ]
00:17:53.044  }'
00:17:53.044    11:39:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:53.044   11:39:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:53.044    11:39:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:53.044   11:39:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:53.044   11:39:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:17:53.044   11:39:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:53.044   11:39:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:53.044  [2024-12-16 11:39:19.044585] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:53.044  [2024-12-16 11:39:19.098303] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:17:53.044  [2024-12-16 11:39:19.098363] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:53.044  [2024-12-16 11:39:19.098396] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:53.044  [2024-12-16 11:39:19.098403] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:17:53.044   11:39:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:53.044   11:39:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:53.044   11:39:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:53.044   11:39:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:53.044   11:39:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:53.044   11:39:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:53.044   11:39:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:53.044   11:39:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:53.044   11:39:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:53.044   11:39:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:53.044   11:39:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:53.304    11:39:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:53.304    11:39:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:53.304    11:39:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:53.304    11:39:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:53.304    11:39:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:53.304   11:39:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:53.304    "name": "raid_bdev1",
00:17:53.304    "uuid": "534cce03-d069-42c9-b2b1-0f48714b7aa3",
00:17:53.304    "strip_size_kb": 0,
00:17:53.304    "state": "online",
00:17:53.304    "raid_level": "raid1",
00:17:53.304    "superblock": true,
00:17:53.304    "num_base_bdevs": 2,
00:17:53.304    "num_base_bdevs_discovered": 1,
00:17:53.304    "num_base_bdevs_operational": 1,
00:17:53.304    "base_bdevs_list": [
00:17:53.304      {
00:17:53.304        "name": null,
00:17:53.304        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:53.304        "is_configured": false,
00:17:53.304        "data_offset": 0,
00:17:53.304        "data_size": 7936
00:17:53.304      },
00:17:53.304      {
00:17:53.304        "name": "BaseBdev2",
00:17:53.304        "uuid": "4336b89f-b418-58b9-9b6f-1dddf32e94cd",
00:17:53.304        "is_configured": true,
00:17:53.304        "data_offset": 256,
00:17:53.304        "data_size": 7936
00:17:53.304      }
00:17:53.304    ]
00:17:53.304  }'
00:17:53.304   11:39:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:53.304   11:39:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:53.564   11:39:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:17:53.564   11:39:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:53.564   11:39:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:53.564  [2024-12-16 11:39:19.496767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:17:53.564  [2024-12-16 11:39:19.496833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:53.564  [2024-12-16 11:39:19.496861] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:17:53.564  [2024-12-16 11:39:19.496871] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:53.564  [2024-12-16 11:39:19.497107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:53.564  [2024-12-16 11:39:19.497129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:17:53.564  [2024-12-16 11:39:19.497194] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:17:53.564  [2024-12-16 11:39:19.497206] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:17:53.564  [2024-12-16 11:39:19.497221] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:17:53.564  [2024-12-16 11:39:19.497248] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:17:53.564  [2024-12-16 11:39:19.498866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50
00:17:53.564  [2024-12-16 11:39:19.500725] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:17:53.564  spare
00:17:53.564   11:39:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:53.564   11:39:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1
00:17:54.502   11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:54.502   11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:54.502   11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:54.502   11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:54.502   11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:54.502    11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:54.502    11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:54.502    11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:54.502    11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:54.502    11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:54.502   11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:54.502    "name": "raid_bdev1",
00:17:54.502    "uuid": "534cce03-d069-42c9-b2b1-0f48714b7aa3",
00:17:54.502    "strip_size_kb": 0,
00:17:54.502    "state": "online",
00:17:54.502    "raid_level": "raid1",
00:17:54.502    "superblock": true,
00:17:54.502    "num_base_bdevs": 2,
00:17:54.502    "num_base_bdevs_discovered": 2,
00:17:54.502    "num_base_bdevs_operational": 2,
00:17:54.502    "process": {
00:17:54.502      "type": "rebuild",
00:17:54.502      "target": "spare",
00:17:54.502      "progress": {
00:17:54.502        "blocks": 2560,
00:17:54.502        "percent": 32
00:17:54.502      }
00:17:54.502    },
00:17:54.502    "base_bdevs_list": [
00:17:54.502      {
00:17:54.502        "name": "spare",
00:17:54.502        "uuid": "eaa7a401-db1a-5877-9273-f9e6cc343eb2",
00:17:54.502        "is_configured": true,
00:17:54.502        "data_offset": 256,
00:17:54.502        "data_size": 7936
00:17:54.502      },
00:17:54.502      {
00:17:54.502        "name": "BaseBdev2",
00:17:54.502        "uuid": "4336b89f-b418-58b9-9b6f-1dddf32e94cd",
00:17:54.502        "is_configured": true,
00:17:54.502        "data_offset": 256,
00:17:54.502        "data_size": 7936
00:17:54.502      }
00:17:54.502    ]
00:17:54.502  }'
00:17:54.502    11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:54.762   11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:54.762    11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:54.762   11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:54.762   11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:17:54.762   11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:54.762   11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:54.762  [2024-12-16 11:39:20.663418] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:54.762  [2024-12-16 11:39:20.705210] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:17:54.762  [2024-12-16 11:39:20.705281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:54.762  [2024-12-16 11:39:20.705295] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:54.762  [2024-12-16 11:39:20.705304] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:17:54.762   11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:54.762   11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:54.762   11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:54.762   11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:54.762   11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:54.762   11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:54.762   11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:54.762   11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:54.762   11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:54.762   11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:54.762   11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:54.762    11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:54.762    11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:54.762    11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:54.762    11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:54.762    11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:54.762   11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:54.762    "name": "raid_bdev1",
00:17:54.762    "uuid": "534cce03-d069-42c9-b2b1-0f48714b7aa3",
00:17:54.762    "strip_size_kb": 0,
00:17:54.762    "state": "online",
00:17:54.762    "raid_level": "raid1",
00:17:54.762    "superblock": true,
00:17:54.762    "num_base_bdevs": 2,
00:17:54.762    "num_base_bdevs_discovered": 1,
00:17:54.762    "num_base_bdevs_operational": 1,
00:17:54.762    "base_bdevs_list": [
00:17:54.762      {
00:17:54.762        "name": null,
00:17:54.762        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:54.762        "is_configured": false,
00:17:54.762        "data_offset": 0,
00:17:54.762        "data_size": 7936
00:17:54.762      },
00:17:54.762      {
00:17:54.762        "name": "BaseBdev2",
00:17:54.763        "uuid": "4336b89f-b418-58b9-9b6f-1dddf32e94cd",
00:17:54.763        "is_configured": true,
00:17:54.763        "data_offset": 256,
00:17:54.763        "data_size": 7936
00:17:54.763      }
00:17:54.763    ]
00:17:54.763  }'
00:17:54.763   11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:54.763   11:39:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:55.332   11:39:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:55.332   11:39:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:55.332   11:39:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:55.332   11:39:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:55.332   11:39:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:55.332    11:39:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:55.332    11:39:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:55.332    11:39:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:55.332    11:39:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:55.332    11:39:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:55.332   11:39:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:55.332    "name": "raid_bdev1",
00:17:55.332    "uuid": "534cce03-d069-42c9-b2b1-0f48714b7aa3",
00:17:55.332    "strip_size_kb": 0,
00:17:55.332    "state": "online",
00:17:55.332    "raid_level": "raid1",
00:17:55.332    "superblock": true,
00:17:55.332    "num_base_bdevs": 2,
00:17:55.332    "num_base_bdevs_discovered": 1,
00:17:55.332    "num_base_bdevs_operational": 1,
00:17:55.332    "base_bdevs_list": [
00:17:55.332      {
00:17:55.332        "name": null,
00:17:55.332        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:55.332        "is_configured": false,
00:17:55.332        "data_offset": 0,
00:17:55.332        "data_size": 7936
00:17:55.333      },
00:17:55.333      {
00:17:55.333        "name": "BaseBdev2",
00:17:55.333        "uuid": "4336b89f-b418-58b9-9b6f-1dddf32e94cd",
00:17:55.333        "is_configured": true,
00:17:55.333        "data_offset": 256,
00:17:55.333        "data_size": 7936
00:17:55.333      }
00:17:55.333    ]
00:17:55.333  }'
00:17:55.333    11:39:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:55.333   11:39:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:55.333    11:39:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:55.333   11:39:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:55.333   11:39:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:17:55.333   11:39:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:55.333   11:39:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:55.333   11:39:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:55.333   11:39:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:17:55.333   11:39:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:55.333   11:39:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:55.333  [2024-12-16 11:39:21.243382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:17:55.333  [2024-12-16 11:39:21.243441] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:55.333  [2024-12-16 11:39:21.243461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:17:55.333  [2024-12-16 11:39:21.243472] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:55.333  [2024-12-16 11:39:21.243681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:55.333  [2024-12-16 11:39:21.243707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:17:55.333  [2024-12-16 11:39:21.243755] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:17:55.333  [2024-12-16 11:39:21.243775] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:17:55.333  [2024-12-16 11:39:21.243791] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:17:55.333  [2024-12-16 11:39:21.243803] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:17:55.333  BaseBdev1
00:17:55.333   11:39:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:55.333   11:39:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1
00:17:56.272   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:56.272   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:56.272   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:56.272   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:56.272   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:56.272   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:56.272   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:56.272   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:56.272   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:56.272   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:56.272    11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:56.272    11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:56.272    11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:56.272    11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:56.272    11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:56.272   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:56.272    "name": "raid_bdev1",
00:17:56.272    "uuid": "534cce03-d069-42c9-b2b1-0f48714b7aa3",
00:17:56.272    "strip_size_kb": 0,
00:17:56.272    "state": "online",
00:17:56.272    "raid_level": "raid1",
00:17:56.272    "superblock": true,
00:17:56.272    "num_base_bdevs": 2,
00:17:56.272    "num_base_bdevs_discovered": 1,
00:17:56.272    "num_base_bdevs_operational": 1,
00:17:56.272    "base_bdevs_list": [
00:17:56.272      {
00:17:56.272        "name": null,
00:17:56.272        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:56.272        "is_configured": false,
00:17:56.272        "data_offset": 0,
00:17:56.272        "data_size": 7936
00:17:56.272      },
00:17:56.272      {
00:17:56.272        "name": "BaseBdev2",
00:17:56.272        "uuid": "4336b89f-b418-58b9-9b6f-1dddf32e94cd",
00:17:56.272        "is_configured": true,
00:17:56.272        "data_offset": 256,
00:17:56.272        "data_size": 7936
00:17:56.272      }
00:17:56.272    ]
00:17:56.272  }'
00:17:56.272   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:56.272   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:56.854   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:56.854   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:56.854   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:56.854   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:56.854   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:56.854    11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:56.854    11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:56.854    11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:56.854    11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:56.854    11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:56.854   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:56.854    "name": "raid_bdev1",
00:17:56.854    "uuid": "534cce03-d069-42c9-b2b1-0f48714b7aa3",
00:17:56.854    "strip_size_kb": 0,
00:17:56.854    "state": "online",
00:17:56.854    "raid_level": "raid1",
00:17:56.854    "superblock": true,
00:17:56.854    "num_base_bdevs": 2,
00:17:56.854    "num_base_bdevs_discovered": 1,
00:17:56.854    "num_base_bdevs_operational": 1,
00:17:56.854    "base_bdevs_list": [
00:17:56.854      {
00:17:56.854        "name": null,
00:17:56.854        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:56.854        "is_configured": false,
00:17:56.854        "data_offset": 0,
00:17:56.854        "data_size": 7936
00:17:56.854      },
00:17:56.854      {
00:17:56.854        "name": "BaseBdev2",
00:17:56.854        "uuid": "4336b89f-b418-58b9-9b6f-1dddf32e94cd",
00:17:56.854        "is_configured": true,
00:17:56.854        "data_offset": 256,
00:17:56.854        "data_size": 7936
00:17:56.854      }
00:17:56.854    ]
00:17:56.854  }'
00:17:56.854    11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:56.854   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:56.854    11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:56.854   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:56.854   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:17:56.854   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0
00:17:56.854   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:17:56.854   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:17:56.854   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:56.854    11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:17:56.854   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:56.854   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:17:56.854   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:56.854   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:56.854  [2024-12-16 11:39:22.808743] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:56.854  [2024-12-16 11:39:22.808910] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:17:56.854  [2024-12-16 11:39:22.808927] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:17:56.854  request:
00:17:56.854  {
00:17:56.854  "base_bdev": "BaseBdev1",
00:17:56.854  "raid_bdev": "raid_bdev1",
00:17:56.854  "method": "bdev_raid_add_base_bdev",
00:17:56.854  "req_id": 1
00:17:56.854  }
00:17:56.854  Got JSON-RPC error response
00:17:56.854  response:
00:17:56.854  {
00:17:56.854  "code": -22,
00:17:56.854  "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:17:56.854  }
00:17:56.854   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:17:56.854   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1
00:17:56.854   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:56.854   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:17:56.854   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:17:56.854   11:39:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1
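The rejected RPC above is the intended outcome: BaseBdev1's superblock carries a lower sequence number (1 vs 5) and does not contain the raid bdev's uuid, so re-adding it fails with -22 (Invalid argument). Stripped of the test harness, the negative check is roughly the following sketch (the rpc.py invocation and bdev names are assumed from the trace, not guaranteed):

    # Expect bdev_raid_add_base_bdev to be rejected for the stale base bdev.
    if ./scripts/rpc.py bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 2>/dev/null; then
        echo "unexpected success: stale BaseBdev1 was accepted" >&2
        exit 1
    fi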
00:17:57.794   11:39:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:57.794   11:39:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:57.794   11:39:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:57.794   11:39:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:57.794   11:39:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:57.794   11:39:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:57.794   11:39:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:57.794   11:39:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:57.794   11:39:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:57.794   11:39:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:57.794    11:39:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:57.794    11:39:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:57.794    11:39:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:57.794    11:39:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:57.794    11:39:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:58.055   11:39:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:58.055    "name": "raid_bdev1",
00:17:58.055    "uuid": "534cce03-d069-42c9-b2b1-0f48714b7aa3",
00:17:58.055    "strip_size_kb": 0,
00:17:58.055    "state": "online",
00:17:58.055    "raid_level": "raid1",
00:17:58.055    "superblock": true,
00:17:58.055    "num_base_bdevs": 2,
00:17:58.055    "num_base_bdevs_discovered": 1,
00:17:58.055    "num_base_bdevs_operational": 1,
00:17:58.055    "base_bdevs_list": [
00:17:58.055      {
00:17:58.055        "name": null,
00:17:58.055        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:58.055        "is_configured": false,
00:17:58.055        "data_offset": 0,
00:17:58.055        "data_size": 7936
00:17:58.055      },
00:17:58.055      {
00:17:58.055        "name": "BaseBdev2",
00:17:58.055        "uuid": "4336b89f-b418-58b9-9b6f-1dddf32e94cd",
00:17:58.055        "is_configured": true,
00:17:58.055        "data_offset": 256,
00:17:58.055        "data_size": 7936
00:17:58.055      }
00:17:58.055    ]
00:17:58.055  }'
00:17:58.055   11:39:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:58.055   11:39:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:58.315   11:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:58.315   11:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:58.315   11:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:58.315   11:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:58.315   11:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:58.315    11:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:58.315    11:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:58.315    11:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:58.315    11:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:58.315    11:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:58.315   11:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:58.315    "name": "raid_bdev1",
00:17:58.315    "uuid": "534cce03-d069-42c9-b2b1-0f48714b7aa3",
00:17:58.315    "strip_size_kb": 0,
00:17:58.315    "state": "online",
00:17:58.315    "raid_level": "raid1",
00:17:58.315    "superblock": true,
00:17:58.315    "num_base_bdevs": 2,
00:17:58.315    "num_base_bdevs_discovered": 1,
00:17:58.315    "num_base_bdevs_operational": 1,
00:17:58.315    "base_bdevs_list": [
00:17:58.315      {
00:17:58.315        "name": null,
00:17:58.315        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:58.315        "is_configured": false,
00:17:58.315        "data_offset": 0,
00:17:58.315        "data_size": 7936
00:17:58.315      },
00:17:58.315      {
00:17:58.315        "name": "BaseBdev2",
00:17:58.315        "uuid": "4336b89f-b418-58b9-9b6f-1dddf32e94cd",
00:17:58.315        "is_configured": true,
00:17:58.315        "data_offset": 256,
00:17:58.315        "data_size": 7936
00:17:58.315      }
00:17:58.315    ]
00:17:58.315  }'
00:17:58.315    11:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:58.315   11:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:58.315    11:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:58.575   11:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:58.575   11:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 98471
00:17:58.575   11:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 98471 ']'
00:17:58.575   11:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 98471
00:17:58.575    11:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname
00:17:58.575   11:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:17:58.575    11:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98471
00:17:58.575  killing process with pid 98471
00:17:58.575   11:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:17:58.575   11:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:17:58.575   11:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98471'
00:17:58.575   11:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 98471
00:17:58.575  Received shutdown signal, test time was about 60.000000 seconds
00:17:58.575  
00:17:58.575                                                                                                  Latency(us)
00:17:58.575  
[2024-12-16T11:39:24.642Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:58.575  
[2024-12-16T11:39:24.642Z]  ===================================================================================================================
00:17:58.575  
[2024-12-16T11:39:24.642Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:17:58.575   11:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 98471
00:17:58.575  [2024-12-16 11:39:24.421092] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:17:58.575  [2024-12-16 11:39:24.421229] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:58.575  [2024-12-16 11:39:24.421295] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:58.575  [2024-12-16 11:39:24.421306] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline
00:17:58.575  [2024-12-16 11:39:24.454490] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:17:58.835   11:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0
00:17:58.835  
00:17:58.835  real	0m18.137s
00:17:58.835  user	0m24.126s
00:17:58.835  sys	0m2.536s
00:17:58.835   11:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable
00:17:58.835   11:39:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:58.835  ************************************
00:17:58.835  END TEST raid_rebuild_test_sb_md_separate
00:17:58.835  ************************************
00:17:58.835   11:39:24 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i'
00:17:58.835   11:39:24 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true
00:17:58.835   11:39:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:17:58.835   11:39:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:17:58.835   11:39:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:17:58.835  ************************************
00:17:58.835  START TEST raid_state_function_test_sb_md_interleaved
00:17:58.835  ************************************
00:17:58.835   11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true
00:17:58.835   11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:17:58.835   11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:17:58.835   11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:17:58.835   11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:17:58.835    11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:17:58.835    11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:17:58.835    11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:17:58.835    11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:17:58.835    11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:17:58.835    11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:17:58.835    11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:17:58.835    11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:17:58.835   11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:17:58.835   11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:17:58.835   11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:17:58.835   11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size
00:17:58.835   11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:17:58.835   11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:17:58.835   11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:17:58.835   11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:17:58.835   11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:17:58.835   11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:17:58.835   11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=99146
00:17:58.835   11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:17:58.835  Process raid pid: 99146
00:17:58.835   11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 99146'
00:17:58.835   11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 99146
00:17:58.835   11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99146 ']'
00:17:58.835   11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:58.835   11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100
00:17:58.835  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:58.835   11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:58.835   11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable
00:17:58.835   11:39:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:58.835  [2024-12-16 11:39:24.849176] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:17:58.835  [2024-12-16 11:39:24.849325] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:59.094  [2024-12-16 11:39:25.015627] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:59.094  [2024-12-16 11:39:25.062848] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:17:59.094  [2024-12-16 11:39:25.106727] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:59.094  [2024-12-16 11:39:25.106768] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:59.662   11:39:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:17:59.662   11:39:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0
00:17:59.662   11:39:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:17:59.662   11:39:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:59.662   11:39:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:59.662  [2024-12-16 11:39:25.708104] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:17:59.662  [2024-12-16 11:39:25.708158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:17:59.662  [2024-12-16 11:39:25.708171] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:59.662  [2024-12-16 11:39:25.708181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:59.662   11:39:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:59.662   11:39:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:17:59.662   11:39:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:17:59.662   11:39:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:17:59.662   11:39:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:59.662   11:39:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:59.662   11:39:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:59.662   11:39:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:59.662   11:39:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:59.662   11:39:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:59.662   11:39:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:59.663    11:39:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:59.663    11:39:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:59.663    11:39:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:59.663    11:39:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:59.922    11:39:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:59.922   11:39:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:59.922    "name": "Existed_Raid",
00:17:59.922    "uuid": "c1f60c4e-6626-4387-bab2-478b921ecec2",
00:17:59.922    "strip_size_kb": 0,
00:17:59.922    "state": "configuring",
00:17:59.922    "raid_level": "raid1",
00:17:59.922    "superblock": true,
00:17:59.922    "num_base_bdevs": 2,
00:17:59.922    "num_base_bdevs_discovered": 0,
00:17:59.922    "num_base_bdevs_operational": 2,
00:17:59.922    "base_bdevs_list": [
00:17:59.922      {
00:17:59.922        "name": "BaseBdev1",
00:17:59.922        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:59.922        "is_configured": false,
00:17:59.922        "data_offset": 0,
00:17:59.922        "data_size": 0
00:17:59.922      },
00:17:59.922      {
00:17:59.922        "name": "BaseBdev2",
00:17:59.922        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:59.922        "is_configured": false,
00:17:59.922        "data_offset": 0,
00:17:59.922        "data_size": 0
00:17:59.922      }
00:17:59.922    ]
00:17:59.922  }'
00:17:59.922   11:39:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:59.922   11:39:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:00.182  [2024-12-16 11:39:26.175287] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:18:00.182  [2024-12-16 11:39:26.175349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:00.182  [2024-12-16 11:39:26.187296] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:18:00.182  [2024-12-16 11:39:26.187344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:18:00.182  [2024-12-16 11:39:26.187352] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:18:00.182  [2024-12-16 11:39:26.187361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:00.182  [2024-12-16 11:39:26.208317] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:18:00.182  BaseBdev1
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:00.182  [
00:18:00.182  {
00:18:00.182  "name": "BaseBdev1",
00:18:00.182  "aliases": [
00:18:00.182  "206317a8-7256-4e3f-acc4-f9ca860c3644"
00:18:00.182  ],
00:18:00.182  "product_name": "Malloc disk",
00:18:00.182  "block_size": 4128,
00:18:00.182  "num_blocks": 8192,
00:18:00.182  "uuid": "206317a8-7256-4e3f-acc4-f9ca860c3644",
00:18:00.182  "md_size": 32,
00:18:00.182  "md_interleave": true,
00:18:00.182  "dif_type": 0,
00:18:00.182  "assigned_rate_limits": {
00:18:00.182  "rw_ios_per_sec": 0,
00:18:00.182  "rw_mbytes_per_sec": 0,
00:18:00.182  "r_mbytes_per_sec": 0,
00:18:00.182  "w_mbytes_per_sec": 0
00:18:00.182  },
00:18:00.182  "claimed": true,
00:18:00.182  "claim_type": "exclusive_write",
00:18:00.182  "zoned": false,
00:18:00.182  "supported_io_types": {
00:18:00.182  "read": true,
00:18:00.182  "write": true,
00:18:00.182  "unmap": true,
00:18:00.182  "flush": true,
00:18:00.182  "reset": true,
00:18:00.182  "nvme_admin": false,
00:18:00.182  "nvme_io": false,
00:18:00.182  "nvme_io_md": false,
00:18:00.182  "write_zeroes": true,
00:18:00.182  "zcopy": true,
00:18:00.182  "get_zone_info": false,
00:18:00.182  "zone_management": false,
00:18:00.182  "zone_append": false,
00:18:00.182  "compare": false,
00:18:00.182  "compare_and_write": false,
00:18:00.182  "abort": true,
00:18:00.182  "seek_hole": false,
00:18:00.182  "seek_data": false,
00:18:00.182  "copy": true,
00:18:00.182  "nvme_iov_md": false
00:18:00.182  },
00:18:00.182  "memory_domains": [
00:18:00.182  {
00:18:00.182  "dma_device_id": "system",
00:18:00.182  "dma_device_type": 1
00:18:00.182  },
00:18:00.182  {
00:18:00.182  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:00.182  "dma_device_type": 2
00:18:00.182  }
00:18:00.182  ],
00:18:00.182  "driver_specific": {}
00:18:00.182  }
00:18:00.182  ]
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0
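waitforbdev, as exercised above for BaseBdev1, only waits for the examine phase to finish and then confirms the bdev is visible within a timeout; in isolation it is roughly the following sketch (the 2000 ms timeout mirrors the trace):

    # Wait for bdev examine to complete, then require BaseBdev1 to show up.
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py bdev_get_bdevs -b BaseBdev1 -t 2000 > /dev/null  # non-zero exit if the bdev never appears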
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:00.182   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:00.442    11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:00.442    11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:00.442    11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:00.442    11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:00.442    11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:00.442   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:00.442    "name": "Existed_Raid",
00:18:00.442    "uuid": "7761164e-f999-4d0d-8952-991abe34eb05",
00:18:00.442    "strip_size_kb": 0,
00:18:00.442    "state": "configuring",
00:18:00.442    "raid_level": "raid1",
00:18:00.442    "superblock": true,
00:18:00.442    "num_base_bdevs": 2,
00:18:00.442    "num_base_bdevs_discovered": 1,
00:18:00.442    "num_base_bdevs_operational": 2,
00:18:00.442    "base_bdevs_list": [
00:18:00.442      {
00:18:00.442        "name": "BaseBdev1",
00:18:00.442        "uuid": "206317a8-7256-4e3f-acc4-f9ca860c3644",
00:18:00.442        "is_configured": true,
00:18:00.442        "data_offset": 256,
00:18:00.442        "data_size": 7936
00:18:00.442      },
00:18:00.442      {
00:18:00.442        "name": "BaseBdev2",
00:18:00.442        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:00.442        "is_configured": false,
00:18:00.442        "data_offset": 0,
00:18:00.442        "data_size": 0
00:18:00.442      }
00:18:00.442    ]
00:18:00.442  }'
00:18:00.442   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:00.442   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:00.702   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:18:00.702   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:00.702   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:00.702  [2024-12-16 11:39:26.715526] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:18:00.702  [2024-12-16 11:39:26.715601] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:18:00.702   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:00.702   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:18:00.702   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:00.702   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:00.702  [2024-12-16 11:39:26.727587] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:18:00.702  [2024-12-16 11:39:26.729435] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:18:00.702  [2024-12-16 11:39:26.729479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:18:00.702   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:00.702   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:18:00.702   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:18:00.702   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:18:00.702   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:18:00.702   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:00.702   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:00.702   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:00.702   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:00.702   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:00.702   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:00.702   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:00.702   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:00.702    11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:00.702    11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:00.702    11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:00.702    11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:00.702    11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:00.961   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:00.961    "name": "Existed_Raid",
00:18:00.961    "uuid": "243718c3-7194-4494-a2a9-aa96ea5e16bc",
00:18:00.961    "strip_size_kb": 0,
00:18:00.961    "state": "configuring",
00:18:00.961    "raid_level": "raid1",
00:18:00.961    "superblock": true,
00:18:00.961    "num_base_bdevs": 2,
00:18:00.961    "num_base_bdevs_discovered": 1,
00:18:00.961    "num_base_bdevs_operational": 2,
00:18:00.961    "base_bdevs_list": [
00:18:00.961      {
00:18:00.961        "name": "BaseBdev1",
00:18:00.961        "uuid": "206317a8-7256-4e3f-acc4-f9ca860c3644",
00:18:00.961        "is_configured": true,
00:18:00.961        "data_offset": 256,
00:18:00.961        "data_size": 7936
00:18:00.961      },
00:18:00.961      {
00:18:00.961        "name": "BaseBdev2",
00:18:00.961        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:00.961        "is_configured": false,
00:18:00.961        "data_offset": 0,
00:18:00.961        "data_size": 0
00:18:00.961      }
00:18:00.961    ]
00:18:00.961  }'
00:18:00.961   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:00.961   11:39:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:01.221   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2
00:18:01.221   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:01.221   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:01.221  [2024-12-16 11:39:27.233889] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:18:01.221  [2024-12-16 11:39:27.234188] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:18:01.221  [2024-12-16 11:39:27.234229] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:18:01.221  [2024-12-16 11:39:27.234404] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:18:01.222  [2024-12-16 11:39:27.234554] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:18:01.222  [2024-12-16 11:39:27.234588] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:18:01.222  BaseBdev2
00:18:01.222  [2024-12-16 11:39:27.234690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:01.222   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:01.222   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:18:01.222   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:18:01.222   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:18:01.222   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i
00:18:01.222   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:18:01.222   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:18:01.222   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:18:01.222   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:01.222   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:01.222   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:01.222   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:18:01.222   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:01.222   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:01.222  [
00:18:01.222  {
00:18:01.222  "name": "BaseBdev2",
00:18:01.222  "aliases": [
00:18:01.222  "c8f14769-e4cd-4723-a6a6-a880a6f4d638"
00:18:01.222  ],
00:18:01.222  "product_name": "Malloc disk",
00:18:01.222  "block_size": 4128,
00:18:01.222  "num_blocks": 8192,
00:18:01.222  "uuid": "c8f14769-e4cd-4723-a6a6-a880a6f4d638",
00:18:01.222  "md_size": 32,
00:18:01.222  "md_interleave": true,
00:18:01.222  "dif_type": 0,
00:18:01.222  "assigned_rate_limits": {
00:18:01.222  "rw_ios_per_sec": 0,
00:18:01.222  "rw_mbytes_per_sec": 0,
00:18:01.222  "r_mbytes_per_sec": 0,
00:18:01.222  "w_mbytes_per_sec": 0
00:18:01.222  },
00:18:01.222  "claimed": true,
00:18:01.222  "claim_type": "exclusive_write",
00:18:01.222  "zoned": false,
00:18:01.222  "supported_io_types": {
00:18:01.222  "read": true,
00:18:01.222  "write": true,
00:18:01.222  "unmap": true,
00:18:01.222  "flush": true,
00:18:01.222  "reset": true,
00:18:01.222  "nvme_admin": false,
00:18:01.222  "nvme_io": false,
00:18:01.222  "nvme_io_md": false,
00:18:01.222  "write_zeroes": true,
00:18:01.222  "zcopy": true,
00:18:01.222  "get_zone_info": false,
00:18:01.222  "zone_management": false,
00:18:01.222  "zone_append": false,
00:18:01.222  "compare": false,
00:18:01.222  "compare_and_write": false,
00:18:01.222  "abort": true,
00:18:01.222  "seek_hole": false,
00:18:01.222  "seek_data": false,
00:18:01.222  "copy": true,
00:18:01.222  "nvme_iov_md": false
00:18:01.222  },
00:18:01.222  "memory_domains": [
00:18:01.222  {
00:18:01.222  "dma_device_id": "system",
00:18:01.222  "dma_device_type": 1
00:18:01.222  },
00:18:01.222  {
00:18:01.222  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:01.222  "dma_device_type": 2
00:18:01.222  }
00:18:01.222  ],
00:18:01.222  "driver_specific": {}
00:18:01.222  }
00:18:01.222  ]
00:18:01.222   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:01.222   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0
00:18:01.222   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:18:01.222   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:18:01.222   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:18:01.222   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:18:01.222   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:01.222   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:01.222   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:01.222   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:01.222   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:01.222   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:01.222   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:01.222   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:01.222    11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:01.222    11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:01.222    11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:01.222    11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:01.482    11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:01.482   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:01.482    "name": "Existed_Raid",
00:18:01.482    "uuid": "243718c3-7194-4494-a2a9-aa96ea5e16bc",
00:18:01.482    "strip_size_kb": 0,
00:18:01.482    "state": "online",
00:18:01.482    "raid_level": "raid1",
00:18:01.482    "superblock": true,
00:18:01.482    "num_base_bdevs": 2,
00:18:01.482    "num_base_bdevs_discovered": 2,
00:18:01.482    "num_base_bdevs_operational": 2,
00:18:01.482    "base_bdevs_list": [
00:18:01.482      {
00:18:01.482        "name": "BaseBdev1",
00:18:01.482        "uuid": "206317a8-7256-4e3f-acc4-f9ca860c3644",
00:18:01.482        "is_configured": true,
00:18:01.482        "data_offset": 256,
00:18:01.482        "data_size": 7936
00:18:01.482      },
00:18:01.482      {
00:18:01.482        "name": "BaseBdev2",
00:18:01.482        "uuid": "c8f14769-e4cd-4723-a6a6-a880a6f4d638",
00:18:01.482        "is_configured": true,
00:18:01.482        "data_offset": 256,
00:18:01.482        "data_size": 7936
00:18:01.482      }
00:18:01.482    ]
00:18:01.482  }'
00:18:01.482   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:01.482   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:01.742   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:18:01.742   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:18:01.742   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:18:01.742   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:18:01.742   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name
00:18:01.742   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:18:01.742    11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:18:01.742    11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:18:01.742    11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:01.742    11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:01.742  [2024-12-16 11:39:27.737384] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:01.742    11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:01.742   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:18:01.742    "name": "Existed_Raid",
00:18:01.742    "aliases": [
00:18:01.742      "243718c3-7194-4494-a2a9-aa96ea5e16bc"
00:18:01.742    ],
00:18:01.742    "product_name": "Raid Volume",
00:18:01.742    "block_size": 4128,
00:18:01.742    "num_blocks": 7936,
00:18:01.742    "uuid": "243718c3-7194-4494-a2a9-aa96ea5e16bc",
00:18:01.742    "md_size": 32,
00:18:01.742    "md_interleave": true,
00:18:01.742    "dif_type": 0,
00:18:01.742    "assigned_rate_limits": {
00:18:01.742      "rw_ios_per_sec": 0,
00:18:01.742      "rw_mbytes_per_sec": 0,
00:18:01.742      "r_mbytes_per_sec": 0,
00:18:01.742      "w_mbytes_per_sec": 0
00:18:01.742    },
00:18:01.742    "claimed": false,
00:18:01.742    "zoned": false,
00:18:01.742    "supported_io_types": {
00:18:01.742      "read": true,
00:18:01.742      "write": true,
00:18:01.742      "unmap": false,
00:18:01.742      "flush": false,
00:18:01.742      "reset": true,
00:18:01.742      "nvme_admin": false,
00:18:01.742      "nvme_io": false,
00:18:01.742      "nvme_io_md": false,
00:18:01.742      "write_zeroes": true,
00:18:01.742      "zcopy": false,
00:18:01.742      "get_zone_info": false,
00:18:01.742      "zone_management": false,
00:18:01.742      "zone_append": false,
00:18:01.742      "compare": false,
00:18:01.742      "compare_and_write": false,
00:18:01.742      "abort": false,
00:18:01.742      "seek_hole": false,
00:18:01.742      "seek_data": false,
00:18:01.742      "copy": false,
00:18:01.742      "nvme_iov_md": false
00:18:01.742    },
00:18:01.742    "memory_domains": [
00:18:01.742      {
00:18:01.742        "dma_device_id": "system",
00:18:01.742        "dma_device_type": 1
00:18:01.742      },
00:18:01.742      {
00:18:01.742        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:01.742        "dma_device_type": 2
00:18:01.742      },
00:18:01.742      {
00:18:01.742        "dma_device_id": "system",
00:18:01.742        "dma_device_type": 1
00:18:01.742      },
00:18:01.742      {
00:18:01.742        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:01.742        "dma_device_type": 2
00:18:01.742      }
00:18:01.742    ],
00:18:01.742    "driver_specific": {
00:18:01.742      "raid": {
00:18:01.742        "uuid": "243718c3-7194-4494-a2a9-aa96ea5e16bc",
00:18:01.742        "strip_size_kb": 0,
00:18:01.742        "state": "online",
00:18:01.742        "raid_level": "raid1",
00:18:01.742        "superblock": true,
00:18:01.742        "num_base_bdevs": 2,
00:18:01.742        "num_base_bdevs_discovered": 2,
00:18:01.742        "num_base_bdevs_operational": 2,
00:18:01.742        "base_bdevs_list": [
00:18:01.742          {
00:18:01.742            "name": "BaseBdev1",
00:18:01.742            "uuid": "206317a8-7256-4e3f-acc4-f9ca860c3644",
00:18:01.742            "is_configured": true,
00:18:01.742            "data_offset": 256,
00:18:01.742            "data_size": 7936
00:18:01.742          },
00:18:01.742          {
00:18:01.742            "name": "BaseBdev2",
00:18:01.742            "uuid": "c8f14769-e4cd-4723-a6a6-a880a6f4d638",
00:18:01.742            "is_configured": true,
00:18:01.742            "data_offset": 256,
00:18:01.742            "data_size": 7936
00:18:01.742          }
00:18:01.742        ]
00:18:01.742      }
00:18:01.742    }
00:18:01.742  }'
00:18:01.742    11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:18:02.002   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:18:02.002  BaseBdev2'
00:18:02.002    11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:02.002   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0'
00:18:02.002   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:18:02.002    11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:02.002    11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:18:02.002    11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:02.002    11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:02.002    11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:02.002   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:18:02.002   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:18:02.002   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:18:02.002    11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:18:02.002    11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:02.002    11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:02.002    11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:02.002    11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:02.002   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:18:02.002   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
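The property comparison just traced verifies that the raid volume advertises the same interleaved-metadata geometry as each base bdev (block_size 4128, md_size 32, md_interleave true, dif_type 0); outside the harness it is roughly this sketch (names and the expected tuple are taken from the trace):

    # Compare the raid volume's metadata geometry against each base bdev.
    cmp_raid_bdev=$(./scripts/rpc.py bdev_get_bdevs -b Existed_Raid \
        | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
    for name in BaseBdev1 BaseBdev2; do
        cmp_base_bdev=$(./scripts/rpc.py bdev_get_bdevs -b "$name" \
            | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
        [[ "$cmp_base_bdev" == "$cmp_raid_bdev" ]]   # expect "4128 32 true 0" for both
    done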
00:18:02.002   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:18:02.002   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:02.002   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:02.002  [2024-12-16 11:39:27.980754] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:18:02.002   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:02.002   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state
00:18:02.002   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:18:02.002   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in
00:18:02.002   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0
00:18:02.002   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:18:02.002   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1
00:18:02.002   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:18:02.002   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:02.002   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:02.002   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:02.002   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:02.002   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:02.002   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:02.002   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:02.002   11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:02.002    11:39:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:02.002    11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:02.002    11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:02.002    11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:02.002    11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:02.002   11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:02.002    "name": "Existed_Raid",
00:18:02.002    "uuid": "243718c3-7194-4494-a2a9-aa96ea5e16bc",
00:18:02.002    "strip_size_kb": 0,
00:18:02.002    "state": "online",
00:18:02.002    "raid_level": "raid1",
00:18:02.002    "superblock": true,
00:18:02.002    "num_base_bdevs": 2,
00:18:02.002    "num_base_bdevs_discovered": 1,
00:18:02.002    "num_base_bdevs_operational": 1,
00:18:02.002    "base_bdevs_list": [
00:18:02.002      {
00:18:02.002        "name": null,
00:18:02.002        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:02.002        "is_configured": false,
00:18:02.002        "data_offset": 0,
00:18:02.002        "data_size": 7936
00:18:02.002      },
00:18:02.002      {
00:18:02.002        "name": "BaseBdev2",
00:18:02.002        "uuid": "c8f14769-e4cd-4723-a6a6-a880a6f4d638",
00:18:02.002        "is_configured": true,
00:18:02.002        "data_offset": 256,
00:18:02.002        "data_size": 7936
00:18:02.002      }
00:18:02.002    ]
00:18:02.002  }'
00:18:02.002   11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:02.002   11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:02.574   11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:18:02.574   11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:18:02.574    11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:02.574    11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:18:02.574    11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:02.574    11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:02.574    11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:02.574   11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:18:02.574   11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:18:02.574   11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:18:02.574   11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:02.574   11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:02.574  [2024-12-16 11:39:28.495781] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:18:02.574  [2024-12-16 11:39:28.495896] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:18:02.574  [2024-12-16 11:39:28.508159] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:02.574  [2024-12-16 11:39:28.508212] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:02.574  [2024-12-16 11:39:28.508225] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline
00:18:02.574   11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:02.574   11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:18:02.574   11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:18:02.574    11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:02.574    11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:18:02.574    11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:02.574    11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:02.574    11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:02.574   11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:18:02.574   11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:18:02.574   11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:18:02.574   11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 99146
00:18:02.574   11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99146 ']'
00:18:02.574   11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99146
00:18:02.574    11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname
00:18:02.574   11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:02.574    11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99146
00:18:02.574   11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:18:02.574   11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:18:02.574  killing process with pid 99146
00:18:02.574   11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99146'
00:18:02.574   11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 99146
00:18:02.574  [2024-12-16 11:39:28.604401] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:18:02.574   11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 99146
00:18:02.574  [2024-12-16 11:39:28.605426] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:18:02.835   11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0
00:18:02.835  
00:18:02.835  real	0m4.093s
00:18:02.835  user	0m6.518s
00:18:02.835  sys	0m0.814s
00:18:02.835   11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable
00:18:02.835   11:39:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:02.835  ************************************
00:18:02.835  END TEST raid_state_function_test_sb_md_interleaved
00:18:02.835  ************************************
00:18:03.095   11:39:28 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2
00:18:03.095   11:39:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:18:03.095   11:39:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:18:03.095   11:39:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:18:03.095  ************************************
00:18:03.095  START TEST raid_superblock_test_md_interleaved
00:18:03.095  ************************************
00:18:03.095   11:39:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2
00:18:03.095   11:39:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:18:03.095   11:39:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:18:03.095   11:39:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:18:03.095   11:39:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:18:03.095   11:39:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:18:03.095   11:39:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:18:03.095   11:39:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:18:03.095   11:39:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:18:03.095   11:39:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:18:03.095   11:39:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size
00:18:03.095   11:39:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:18:03.095   11:39:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:18:03.095   11:39:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:18:03.095   11:39:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:18:03.095   11:39:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:18:03.095   11:39:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=99389
00:18:03.095   11:39:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:18:03.095   11:39:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 99389
00:18:03.095   11:39:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99389 ']'
00:18:03.095   11:39:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:03.095   11:39:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100
00:18:03.095  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:03.095   11:39:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:03.095   11:39:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable
00:18:03.095   11:39:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:03.095  [2024-12-16 11:39:29.011039] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:18:03.095  [2024-12-16 11:39:29.011169] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99389 ]
00:18:03.095  [2024-12-16 11:39:29.153183] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:03.355  [2024-12-16 11:39:29.199157] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:18:03.355  [2024-12-16 11:39:29.241613] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:03.355  [2024-12-16 11:39:29.241664] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:03.924   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:18:03.924   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0
00:18:03.924   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:18:03.924   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:18:03.924   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:18:03.924   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:18:03.924   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:18:03.924   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:18:03.924   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:18:03.924   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:18:03.924   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1
00:18:03.924   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:03.924   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:03.924  malloc1
00:18:03.924   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:03.924   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:18:03.924   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:03.924   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:03.924  [2024-12-16 11:39:29.867511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:18:03.924  [2024-12-16 11:39:29.867583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:03.924  [2024-12-16 11:39:29.867606] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:18:03.924  [2024-12-16 11:39:29.867618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:03.925  [2024-12-16 11:39:29.869466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:03.925  [2024-12-16 11:39:29.869504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:18:03.925  pt1
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:03.925  malloc2
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:03.925  [2024-12-16 11:39:29.906882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:18:03.925  [2024-12-16 11:39:29.906937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:03.925  [2024-12-16 11:39:29.906954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:18:03.925  [2024-12-16 11:39:29.906965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:03.925  [2024-12-16 11:39:29.908825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:03.925  [2024-12-16 11:39:29.908858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:18:03.925  pt2
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:03.925  [2024-12-16 11:39:29.918900] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:18:03.925  [2024-12-16 11:39:29.920755] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:18:03.925  [2024-12-16 11:39:29.920911] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:18:03.925  [2024-12-16 11:39:29.920936] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:18:03.925  [2024-12-16 11:39:29.921009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:18:03.925  [2024-12-16 11:39:29.921076] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:18:03.925  [2024-12-16 11:39:29.921089] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:18:03.925  [2024-12-16 11:39:29.921157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:03.925    11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:03.925    11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:03.925    11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:03.925    11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:03.925    11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:03.925    "name": "raid_bdev1",
00:18:03.925    "uuid": "8a2178da-77d5-42a5-9869-4aba04708422",
00:18:03.925    "strip_size_kb": 0,
00:18:03.925    "state": "online",
00:18:03.925    "raid_level": "raid1",
00:18:03.925    "superblock": true,
00:18:03.925    "num_base_bdevs": 2,
00:18:03.925    "num_base_bdevs_discovered": 2,
00:18:03.925    "num_base_bdevs_operational": 2,
00:18:03.925    "base_bdevs_list": [
00:18:03.925      {
00:18:03.925        "name": "pt1",
00:18:03.925        "uuid": "00000000-0000-0000-0000-000000000001",
00:18:03.925        "is_configured": true,
00:18:03.925        "data_offset": 256,
00:18:03.925        "data_size": 7936
00:18:03.925      },
00:18:03.925      {
00:18:03.925        "name": "pt2",
00:18:03.925        "uuid": "00000000-0000-0000-0000-000000000002",
00:18:03.925        "is_configured": true,
00:18:03.925        "data_offset": 256,
00:18:03.925        "data_size": 7936
00:18:03.925      }
00:18:03.925    ]
00:18:03.925  }'
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:03.925   11:39:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:04.494   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:18:04.494   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:18:04.494   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:18:04.494   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:18:04.494   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name
00:18:04.494   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:18:04.494    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:18:04.494    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:04.494    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:18:04.494    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:04.494  [2024-12-16 11:39:30.374472] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:04.494    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:04.494   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:18:04.494    "name": "raid_bdev1",
00:18:04.494    "aliases": [
00:18:04.494      "8a2178da-77d5-42a5-9869-4aba04708422"
00:18:04.494    ],
00:18:04.494    "product_name": "Raid Volume",
00:18:04.494    "block_size": 4128,
00:18:04.494    "num_blocks": 7936,
00:18:04.494    "uuid": "8a2178da-77d5-42a5-9869-4aba04708422",
00:18:04.495    "md_size": 32,
00:18:04.495    "md_interleave": true,
00:18:04.495    "dif_type": 0,
00:18:04.495    "assigned_rate_limits": {
00:18:04.495      "rw_ios_per_sec": 0,
00:18:04.495      "rw_mbytes_per_sec": 0,
00:18:04.495      "r_mbytes_per_sec": 0,
00:18:04.495      "w_mbytes_per_sec": 0
00:18:04.495    },
00:18:04.495    "claimed": false,
00:18:04.495    "zoned": false,
00:18:04.495    "supported_io_types": {
00:18:04.495      "read": true,
00:18:04.495      "write": true,
00:18:04.495      "unmap": false,
00:18:04.495      "flush": false,
00:18:04.495      "reset": true,
00:18:04.495      "nvme_admin": false,
00:18:04.495      "nvme_io": false,
00:18:04.495      "nvme_io_md": false,
00:18:04.495      "write_zeroes": true,
00:18:04.495      "zcopy": false,
00:18:04.495      "get_zone_info": false,
00:18:04.495      "zone_management": false,
00:18:04.495      "zone_append": false,
00:18:04.495      "compare": false,
00:18:04.495      "compare_and_write": false,
00:18:04.495      "abort": false,
00:18:04.495      "seek_hole": false,
00:18:04.495      "seek_data": false,
00:18:04.495      "copy": false,
00:18:04.495      "nvme_iov_md": false
00:18:04.495    },
00:18:04.495    "memory_domains": [
00:18:04.495      {
00:18:04.495        "dma_device_id": "system",
00:18:04.495        "dma_device_type": 1
00:18:04.495      },
00:18:04.495      {
00:18:04.495        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:04.495        "dma_device_type": 2
00:18:04.495      },
00:18:04.495      {
00:18:04.495        "dma_device_id": "system",
00:18:04.495        "dma_device_type": 1
00:18:04.495      },
00:18:04.495      {
00:18:04.495        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:04.495        "dma_device_type": 2
00:18:04.495      }
00:18:04.495    ],
00:18:04.495    "driver_specific": {
00:18:04.495      "raid": {
00:18:04.495        "uuid": "8a2178da-77d5-42a5-9869-4aba04708422",
00:18:04.495        "strip_size_kb": 0,
00:18:04.495        "state": "online",
00:18:04.495        "raid_level": "raid1",
00:18:04.495        "superblock": true,
00:18:04.495        "num_base_bdevs": 2,
00:18:04.495        "num_base_bdevs_discovered": 2,
00:18:04.495        "num_base_bdevs_operational": 2,
00:18:04.495        "base_bdevs_list": [
00:18:04.495          {
00:18:04.495            "name": "pt1",
00:18:04.495            "uuid": "00000000-0000-0000-0000-000000000001",
00:18:04.495            "is_configured": true,
00:18:04.495            "data_offset": 256,
00:18:04.495            "data_size": 7936
00:18:04.495          },
00:18:04.495          {
00:18:04.495            "name": "pt2",
00:18:04.495            "uuid": "00000000-0000-0000-0000-000000000002",
00:18:04.495            "is_configured": true,
00:18:04.495            "data_offset": 256,
00:18:04.495            "data_size": 7936
00:18:04.495          }
00:18:04.495        ]
00:18:04.495      }
00:18:04.495    }
00:18:04.495  }'
00:18:04.495    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:18:04.495   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:18:04.495  pt2'
00:18:04.495    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:04.495   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0'
00:18:04.495   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:18:04.495    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:18:04.495    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:04.495    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:04.495    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:04.495    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:04.755   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:18:04.755   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:18:04.755   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:18:04.755    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:18:04.755    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:04.755    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:04.755    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:04.755    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:04.755   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:18:04.755   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:18:04.755    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:18:04.755    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:18:04.755    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:04.755    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:04.755  [2024-12-16 11:39:30.625953] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:04.755    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:04.755   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8a2178da-77d5-42a5-9869-4aba04708422
00:18:04.755   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 8a2178da-77d5-42a5-9869-4aba04708422 ']'
00:18:04.755   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:18:04.755   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:04.755   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:04.755  [2024-12-16 11:39:30.673613] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:04.755  [2024-12-16 11:39:30.673647] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:18:04.755  [2024-12-16 11:39:30.673745] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:04.755  [2024-12-16 11:39:30.673818] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:04.755  [2024-12-16 11:39:30.673828] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:18:04.755   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:04.755    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:04.755    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:04.755    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:04.755    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:18:04.755    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:04.755   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:18:04.755   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:18:04.755   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:18:04.755   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:18:04.755   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:04.755   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:04.755   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:04.755   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:18:04.755   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:18:04.755   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:04.755   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:04.755   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:04.755    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:18:04.755    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:18:04.755    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:04.755    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:04.755    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:04.755   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:18:04.755   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:18:04.755   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0
00:18:04.755   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:18:04.755   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:18:04.755   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:04.755    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:18:04.756   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:04.756   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:18:04.756   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:04.756   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:04.756  [2024-12-16 11:39:30.809378] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:18:04.756  [2024-12-16 11:39:30.811236] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:18:04.756  [2024-12-16 11:39:30.811308] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:18:04.756  [2024-12-16 11:39:30.811358] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:18:04.756  [2024-12-16 11:39:30.811382] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:04.756  [2024-12-16 11:39:30.811391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring
00:18:04.756  request:
00:18:04.756  {
00:18:04.756  "name": "raid_bdev1",
00:18:04.756  "raid_level": "raid1",
00:18:04.756  "base_bdevs": [
00:18:04.756  "malloc1",
00:18:04.756  "malloc2"
00:18:04.756  ],
00:18:04.756  "superblock": false,
00:18:04.756  "method": "bdev_raid_create",
00:18:04.756  "req_id": 1
00:18:04.756  }
00:18:04.756  Got JSON-RPC error response
00:18:04.756  response:
00:18:04.756  {
00:18:04.756  "code": -17,
00:18:04.756  "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:18:04.756  }
00:18:04.756   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:18:04.756   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1
00:18:04.756   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:04.756   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:04.756   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:05.025    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:05.025    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:05.025    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:05.025    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:18:05.025    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:05.025   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:18:05.025   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:18:05.026   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:18:05.026   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:05.026   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:05.026  [2024-12-16 11:39:30.877235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:18:05.026  [2024-12-16 11:39:30.877355] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:05.026  [2024-12-16 11:39:30.877393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:18:05.026  [2024-12-16 11:39:30.877429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:05.026  [2024-12-16 11:39:30.879329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:05.026  [2024-12-16 11:39:30.879409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:18:05.026  [2024-12-16 11:39:30.879508] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:18:05.026  [2024-12-16 11:39:30.879600] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:18:05.026  pt1
00:18:05.026   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:05.026   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:18:05.026   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:05.026   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:05.026   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:05.026   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:05.026   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:05.026   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:05.026   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:05.026   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:05.026   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:05.026    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:05.026    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:05.026    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:05.026    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:05.026    11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:05.026   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:05.026    "name": "raid_bdev1",
00:18:05.026    "uuid": "8a2178da-77d5-42a5-9869-4aba04708422",
00:18:05.026    "strip_size_kb": 0,
00:18:05.026    "state": "configuring",
00:18:05.026    "raid_level": "raid1",
00:18:05.026    "superblock": true,
00:18:05.026    "num_base_bdevs": 2,
00:18:05.026    "num_base_bdevs_discovered": 1,
00:18:05.026    "num_base_bdevs_operational": 2,
00:18:05.026    "base_bdevs_list": [
00:18:05.026      {
00:18:05.026        "name": "pt1",
00:18:05.026        "uuid": "00000000-0000-0000-0000-000000000001",
00:18:05.026        "is_configured": true,
00:18:05.026        "data_offset": 256,
00:18:05.026        "data_size": 7936
00:18:05.026      },
00:18:05.026      {
00:18:05.026        "name": null,
00:18:05.026        "uuid": "00000000-0000-0000-0000-000000000002",
00:18:05.026        "is_configured": false,
00:18:05.026        "data_offset": 256,
00:18:05.026        "data_size": 7936
00:18:05.026      }
00:18:05.026    ]
00:18:05.026  }'
00:18:05.026   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:05.026   11:39:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:05.300   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:18:05.300   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:18:05.300   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:18:05.300   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:18:05.300   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:05.300   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:05.300  [2024-12-16 11:39:31.352581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:18:05.300  [2024-12-16 11:39:31.352745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:05.300  [2024-12-16 11:39:31.352789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:18:05.300  [2024-12-16 11:39:31.352821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:05.300  [2024-12-16 11:39:31.353007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:05.300  [2024-12-16 11:39:31.353049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:18:05.300  [2024-12-16 11:39:31.353126] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:18:05.300  [2024-12-16 11:39:31.353172] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:18:05.300  [2024-12-16 11:39:31.353259] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:18:05.300  [2024-12-16 11:39:31.353268] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:18:05.300  [2024-12-16 11:39:31.353354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:18:05.300  [2024-12-16 11:39:31.353412] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:18:05.300  [2024-12-16 11:39:31.353425] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:18:05.300  [2024-12-16 11:39:31.353483] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:05.300  pt2
00:18:05.300   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:05.300   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:18:05.300   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:18:05.300   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:18:05.300   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:05.300   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:05.300   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:05.300   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:05.300   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:05.300   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:05.300   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:05.300   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:05.300   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:05.300    11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:05.300    11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:05.300    11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:05.559    11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:05.559    11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:05.559   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:05.559    "name": "raid_bdev1",
00:18:05.559    "uuid": "8a2178da-77d5-42a5-9869-4aba04708422",
00:18:05.559    "strip_size_kb": 0,
00:18:05.559    "state": "online",
00:18:05.559    "raid_level": "raid1",
00:18:05.559    "superblock": true,
00:18:05.559    "num_base_bdevs": 2,
00:18:05.559    "num_base_bdevs_discovered": 2,
00:18:05.559    "num_base_bdevs_operational": 2,
00:18:05.559    "base_bdevs_list": [
00:18:05.559      {
00:18:05.559        "name": "pt1",
00:18:05.559        "uuid": "00000000-0000-0000-0000-000000000001",
00:18:05.559        "is_configured": true,
00:18:05.559        "data_offset": 256,
00:18:05.559        "data_size": 7936
00:18:05.559      },
00:18:05.559      {
00:18:05.559        "name": "pt2",
00:18:05.559        "uuid": "00000000-0000-0000-0000-000000000002",
00:18:05.559        "is_configured": true,
00:18:05.559        "data_offset": 256,
00:18:05.559        "data_size": 7936
00:18:05.559      }
00:18:05.559    ]
00:18:05.559  }'
00:18:05.559   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:05.559   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:05.818   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:18:05.818   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:18:05.818   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:18:05.818   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:18:05.818   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name
00:18:05.818   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:18:05.818    11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:18:05.818    11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:05.818    11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:18:05.818    11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:05.818  [2024-12-16 11:39:31.804102] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:05.818    11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:05.818   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:18:05.818    "name": "raid_bdev1",
00:18:05.818    "aliases": [
00:18:05.818      "8a2178da-77d5-42a5-9869-4aba04708422"
00:18:05.818    ],
00:18:05.818    "product_name": "Raid Volume",
00:18:05.818    "block_size": 4128,
00:18:05.818    "num_blocks": 7936,
00:18:05.818    "uuid": "8a2178da-77d5-42a5-9869-4aba04708422",
00:18:05.818    "md_size": 32,
00:18:05.818    "md_interleave": true,
00:18:05.818    "dif_type": 0,
00:18:05.818    "assigned_rate_limits": {
00:18:05.818      "rw_ios_per_sec": 0,
00:18:05.818      "rw_mbytes_per_sec": 0,
00:18:05.818      "r_mbytes_per_sec": 0,
00:18:05.818      "w_mbytes_per_sec": 0
00:18:05.818    },
00:18:05.818    "claimed": false,
00:18:05.818    "zoned": false,
00:18:05.818    "supported_io_types": {
00:18:05.818      "read": true,
00:18:05.818      "write": true,
00:18:05.818      "unmap": false,
00:18:05.818      "flush": false,
00:18:05.818      "reset": true,
00:18:05.818      "nvme_admin": false,
00:18:05.818      "nvme_io": false,
00:18:05.818      "nvme_io_md": false,
00:18:05.818      "write_zeroes": true,
00:18:05.818      "zcopy": false,
00:18:05.818      "get_zone_info": false,
00:18:05.818      "zone_management": false,
00:18:05.818      "zone_append": false,
00:18:05.818      "compare": false,
00:18:05.818      "compare_and_write": false,
00:18:05.818      "abort": false,
00:18:05.818      "seek_hole": false,
00:18:05.818      "seek_data": false,
00:18:05.818      "copy": false,
00:18:05.818      "nvme_iov_md": false
00:18:05.818    },
00:18:05.818    "memory_domains": [
00:18:05.818      {
00:18:05.818        "dma_device_id": "system",
00:18:05.818        "dma_device_type": 1
00:18:05.818      },
00:18:05.818      {
00:18:05.818        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:05.818        "dma_device_type": 2
00:18:05.818      },
00:18:05.818      {
00:18:05.818        "dma_device_id": "system",
00:18:05.818        "dma_device_type": 1
00:18:05.818      },
00:18:05.818      {
00:18:05.818        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:05.818        "dma_device_type": 2
00:18:05.819      }
00:18:05.819    ],
00:18:05.819    "driver_specific": {
00:18:05.819      "raid": {
00:18:05.819        "uuid": "8a2178da-77d5-42a5-9869-4aba04708422",
00:18:05.819        "strip_size_kb": 0,
00:18:05.819        "state": "online",
00:18:05.819        "raid_level": "raid1",
00:18:05.819        "superblock": true,
00:18:05.819        "num_base_bdevs": 2,
00:18:05.819        "num_base_bdevs_discovered": 2,
00:18:05.819        "num_base_bdevs_operational": 2,
00:18:05.819        "base_bdevs_list": [
00:18:05.819          {
00:18:05.819            "name": "pt1",
00:18:05.819            "uuid": "00000000-0000-0000-0000-000000000001",
00:18:05.819            "is_configured": true,
00:18:05.819            "data_offset": 256,
00:18:05.819            "data_size": 7936
00:18:05.819          },
00:18:05.819          {
00:18:05.819            "name": "pt2",
00:18:05.819            "uuid": "00000000-0000-0000-0000-000000000002",
00:18:05.819            "is_configured": true,
00:18:05.819            "data_offset": 256,
00:18:05.819            "data_size": 7936
00:18:05.819          }
00:18:05.819        ]
00:18:05.819      }
00:18:05.819    }
00:18:05.819  }'
00:18:05.819    11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:18:06.079   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:18:06.079  pt2'
00:18:06.079    11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:06.079   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0'
00:18:06.079   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:18:06.079    11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:18:06.079    11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:06.079    11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:06.079    11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:06.079    11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:06.079   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:18:06.079   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:18:06.079   11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:18:06.079    11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:18:06.079    11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:06.079    11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:06.079    11:39:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:06.079    11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:06.079   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:18:06.079   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
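The two [[ ... ]] checks above confirm that each base bdev (pt1, pt2) reports the same block_size, md_size, md_interleave and dif_type as the assembled array: '4128 32 true 0', i.e. 4096 data bytes plus 32 interleaved metadata bytes per block and no DIF. A minimal standalone sketch of the same comparison, assuming SPDK's scripts/rpc.py is reachable as rpc.py and talks to the default /var/tmp/spdk.sock socket:

    # Hypothetical re-run of the bdev_raid.sh@188-193 check outside the test harness.
    fmt='[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
    raid_fmt=$(rpc.py bdev_get_bdevs -b raid_bdev1 | jq -r ".[] | $fmt")
    for name in pt1 pt2; do
        base_fmt=$(rpc.py bdev_get_bdevs -b "$name" | jq -r ".[] | $fmt")
        [[ $base_fmt == "$raid_fmt" ]] || echo "metadata mismatch on $name: $base_fmt vs $raid_fmt"
    done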
00:18:06.079    11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:18:06.079    11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:18:06.079    11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:06.079    11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:06.079  [2024-12-16 11:39:32.055654] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:06.079    11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:06.079   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 8a2178da-77d5-42a5-9869-4aba04708422 '!=' 8a2178da-77d5-42a5-9869-4aba04708422 ']'
00:18:06.079   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:18:06.079   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in
00:18:06.079   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0
00:18:06.079   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:18:06.079   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:06.079   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:06.079  [2024-12-16 11:39:32.083344] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:18:06.079   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:06.079   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:06.079   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:06.079   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:06.079   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:06.079   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:06.079   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:06.079   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:06.079   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:06.079   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:06.079   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:06.079    11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:06.079    11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:06.079    11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:06.079    11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:06.079    11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:06.079   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:06.079    "name": "raid_bdev1",
00:18:06.079    "uuid": "8a2178da-77d5-42a5-9869-4aba04708422",
00:18:06.079    "strip_size_kb": 0,
00:18:06.079    "state": "online",
00:18:06.079    "raid_level": "raid1",
00:18:06.079    "superblock": true,
00:18:06.079    "num_base_bdevs": 2,
00:18:06.079    "num_base_bdevs_discovered": 1,
00:18:06.079    "num_base_bdevs_operational": 1,
00:18:06.079    "base_bdevs_list": [
00:18:06.079      {
00:18:06.079        "name": null,
00:18:06.079        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:06.079        "is_configured": false,
00:18:06.079        "data_offset": 0,
00:18:06.079        "data_size": 7936
00:18:06.079      },
00:18:06.079      {
00:18:06.079        "name": "pt2",
00:18:06.079        "uuid": "00000000-0000-0000-0000-000000000002",
00:18:06.079        "is_configured": true,
00:18:06.079        "data_offset": 256,
00:18:06.079        "data_size": 7936
00:18:06.079      }
00:18:06.079    ]
00:18:06.079  }'
00:18:06.079   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:06.079   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:06.648   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:18:06.648   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:06.649   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:06.649  [2024-12-16 11:39:32.566467] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:06.649  [2024-12-16 11:39:32.566594] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:18:06.649  [2024-12-16 11:39:32.566720] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:06.649  [2024-12-16 11:39:32.566798] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:06.649  [2024-12-16 11:39:32.566843] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:18:06.649   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:06.649    11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:06.649    11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:18:06.649    11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:06.649    11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:06.649    11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:06.649   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:18:06.649   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:18:06.649   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:18:06.649   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:18:06.649   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:18:06.649   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:06.649   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:06.649   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:06.649   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:18:06.649   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:18:06.649   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:18:06.649   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:18:06.649   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1
00:18:06.649   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:18:06.649   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:06.649   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:06.649  [2024-12-16 11:39:32.638321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:18:06.649  [2024-12-16 11:39:32.638377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:06.649  [2024-12-16 11:39:32.638396] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:18:06.649  [2024-12-16 11:39:32.638404] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:06.649  [2024-12-16 11:39:32.640362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:06.649  [2024-12-16 11:39:32.640398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:18:06.649  [2024-12-16 11:39:32.640449] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:18:06.649  [2024-12-16 11:39:32.640479] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:18:06.649  [2024-12-16 11:39:32.640551] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:18:06.649  [2024-12-16 11:39:32.640560] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:18:06.649  [2024-12-16 11:39:32.640645] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:18:06.649  [2024-12-16 11:39:32.640716] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:18:06.649  [2024-12-16 11:39:32.640725] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00
00:18:06.649  [2024-12-16 11:39:32.640779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:06.649  pt2
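Re-creating the pt2 passthru on top of malloc2 is enough for the array to come back: the examine path (raid_bdev_examine_cont above) finds the raid1 superblock still present on the base device and re-assembles raid_bdev1 online with a single configured base bdev. A hedged sketch of that sequence with scripts/rpc.py, using the names and UUID from this log:

    # Recreate the passthru bdev; SPDK re-examines it and finds the raid superblock.
    rpc.py bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    # The re-assembled array should be online with one of two base bdevs discovered.
    rpc.py bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'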
00:18:06.649   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:06.649   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:06.649   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:06.649   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:06.649   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:06.649   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:06.649   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:06.649   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:06.649   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:06.649   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:06.649   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:06.649    11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:06.649    11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:06.649    11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:06.649    11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:06.649    11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:06.649   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:06.649    "name": "raid_bdev1",
00:18:06.649    "uuid": "8a2178da-77d5-42a5-9869-4aba04708422",
00:18:06.649    "strip_size_kb": 0,
00:18:06.649    "state": "online",
00:18:06.649    "raid_level": "raid1",
00:18:06.649    "superblock": true,
00:18:06.649    "num_base_bdevs": 2,
00:18:06.649    "num_base_bdevs_discovered": 1,
00:18:06.649    "num_base_bdevs_operational": 1,
00:18:06.649    "base_bdevs_list": [
00:18:06.649      {
00:18:06.649        "name": null,
00:18:06.649        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:06.649        "is_configured": false,
00:18:06.649        "data_offset": 256,
00:18:06.649        "data_size": 7936
00:18:06.649      },
00:18:06.649      {
00:18:06.649        "name": "pt2",
00:18:06.649        "uuid": "00000000-0000-0000-0000-000000000002",
00:18:06.649        "is_configured": true,
00:18:06.649        "data_offset": 256,
00:18:06.649        "data_size": 7936
00:18:06.649      }
00:18:06.649    ]
00:18:06.649  }'
00:18:06.649   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:06.649   11:39:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:07.217   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:18:07.217   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:07.217   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:07.217  [2024-12-16 11:39:33.113583] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:07.217  [2024-12-16 11:39:33.113686] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:18:07.217  [2024-12-16 11:39:33.113783] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:07.217  [2024-12-16 11:39:33.113850] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:07.217  [2024-12-16 11:39:33.113918] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline
00:18:07.217   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:07.217    11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:18:07.217    11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:07.217    11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:07.217    11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:07.217    11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:07.217   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:18:07.217   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:18:07.217   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']'
00:18:07.217   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:18:07.217   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:07.217   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:07.217  [2024-12-16 11:39:33.169439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:18:07.217  [2024-12-16 11:39:33.169564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:07.217  [2024-12-16 11:39:33.169601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:18:07.217  [2024-12-16 11:39:33.169653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:07.217  [2024-12-16 11:39:33.171604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:07.218  [2024-12-16 11:39:33.171678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:18:07.218  [2024-12-16 11:39:33.171768] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:18:07.218  [2024-12-16 11:39:33.171826] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:18:07.218  [2024-12-16 11:39:33.171958] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:18:07.218  [2024-12-16 11:39:33.172015] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:07.218  [2024-12-16 11:39:33.172055] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring
00:18:07.218  [2024-12-16 11:39:33.172119] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:18:07.218  [2024-12-16 11:39:33.172219] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400
00:18:07.218  [2024-12-16 11:39:33.172259] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:18:07.218  [2024-12-16 11:39:33.172343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:18:07.218  [2024-12-16 11:39:33.172435] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400
00:18:07.218  [2024-12-16 11:39:33.172472] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400
00:18:07.218  [2024-12-16 11:39:33.172589] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:07.218  pt1
00:18:07.218   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:07.218   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']'
00:18:07.218   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:07.218   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:07.218   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:07.218   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:07.218   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:07.218   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:07.218   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:07.218   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:07.218   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:07.218   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:07.218    11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:07.218    11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:07.218    11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:07.218    11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:07.218    11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:07.218   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:07.218    "name": "raid_bdev1",
00:18:07.218    "uuid": "8a2178da-77d5-42a5-9869-4aba04708422",
00:18:07.218    "strip_size_kb": 0,
00:18:07.218    "state": "online",
00:18:07.218    "raid_level": "raid1",
00:18:07.218    "superblock": true,
00:18:07.218    "num_base_bdevs": 2,
00:18:07.218    "num_base_bdevs_discovered": 1,
00:18:07.218    "num_base_bdevs_operational": 1,
00:18:07.218    "base_bdevs_list": [
00:18:07.218      {
00:18:07.218        "name": null,
00:18:07.218        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:07.218        "is_configured": false,
00:18:07.218        "data_offset": 256,
00:18:07.218        "data_size": 7936
00:18:07.218      },
00:18:07.218      {
00:18:07.218        "name": "pt2",
00:18:07.218        "uuid": "00000000-0000-0000-0000-000000000002",
00:18:07.218        "is_configured": true,
00:18:07.218        "data_offset": 256,
00:18:07.218        "data_size": 7936
00:18:07.218      }
00:18:07.218    ]
00:18:07.218  }'
00:18:07.218   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:07.218   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:07.786    11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online
00:18:07.786    11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:07.786    11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:18:07.786    11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:07.786    11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:07.786   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:18:07.786    11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:18:07.786    11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:07.786    11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:07.787    11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
00:18:07.787  [2024-12-16 11:39:33.672826] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:07.787    11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:07.787   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 8a2178da-77d5-42a5-9869-4aba04708422 '!=' 8a2178da-77d5-42a5-9869-4aba04708422 ']'
00:18:07.787   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 99389
00:18:07.787   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99389 ']'
00:18:07.787   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99389
00:18:07.787    11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname
00:18:07.787   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:07.787    11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99389
00:18:07.787   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:18:07.787   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:18:07.787   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99389'
00:18:07.787  killing process with pid 99389
00:18:07.787   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 99389
00:18:07.787  [2024-12-16 11:39:33.736701] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:18:07.787  [2024-12-16 11:39:33.736783] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:07.787  [2024-12-16 11:39:33.736833] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:07.787  [2024-12-16 11:39:33.736842] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline
00:18:07.787   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 99389
00:18:07.787  [2024-12-16 11:39:33.759808] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
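killprocess above follows the usual autotest pattern: check that the pid is still alive and really an SPDK reactor (kill -0, ps -o comm=), send it the default SIGTERM, then wait on it so the teardown messages (raid_bdev_fini_start, raid_bdev_exit) are flushed before the test result is reported. A minimal stand-in for that pattern; raid_pid is illustrative and corresponds to the 99389 used here:

    # Sketch of the killprocess behaviour from autotest_common.sh.
    kill "$raid_pid"          # request shutdown; triggers raid_bdev_fini_start in the target
    wait "$raid_pid" || true  # reap the process, ignoring the non-zero status from the signal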
00:18:08.047  ************************************
00:18:08.047  END TEST raid_superblock_test_md_interleaved
00:18:08.047  ************************************
00:18:08.047   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0
00:18:08.047  
00:18:08.047  real	0m5.059s
00:18:08.047  user	0m8.300s
00:18:08.047  sys	0m1.074s
00:18:08.047   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable
00:18:08.047   11:39:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:08.047   11:39:34 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false
00:18:08.047   11:39:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:18:08.047   11:39:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:18:08.047   11:39:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:18:08.047  ************************************
00:18:08.047  START TEST raid_rebuild_test_sb_md_interleaved
00:18:08.047  ************************************
00:18:08.047   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false
00:18:08.047   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1
00:18:08.047   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2
00:18:08.047   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true
00:18:08.047   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:18:08.047   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false
00:18:08.047    11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:18:08.047    11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:18:08.047    11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:18:08.047    11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:18:08.047    11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:18:08.047    11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:18:08.047    11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:18:08.047    11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:18:08.047   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:18:08.047   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:18:08.047   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:18:08.047   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size
00:18:08.047   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg
00:18:08.047   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:18:08.047   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset
00:18:08.047   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']'
00:18:08.047   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0
00:18:08.047   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
00:18:08.047   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
00:18:08.047   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=99706
00:18:08.047   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 99706
00:18:08.047   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:18:08.047   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99706 ']'
00:18:08.048   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:08.048   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100
00:18:08.048   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:08.048  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:08.048   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable
00:18:08.048   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:08.308  [2024-12-16 11:39:34.155648] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:18:08.308  [2024-12-16 11:39:34.155937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99706 ]
00:18:08.308  I/O size of 3145728 is greater than zero copy threshold (65536).
00:18:08.308  Zero copy mechanism will not be used.
00:18:08.308  [2024-12-16 11:39:34.318325] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:08.308  [2024-12-16 11:39:34.367111] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:18:08.567  [2024-12-16 11:39:34.409768] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:08.567  [2024-12-16 11:39:34.409872] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
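For the rebuild test the harness starts the bdevperf example app in the background (bdev_raid.sh@596 above) rather than spdk_tgt: -T names the RAID bdev the job will target, -L bdev_raid enables the debug logging seen throughout this section, and -z appears to keep the workload idle until it is triggered over RPC. A sketch of the launch-and-wait step, assuming it is run from the SPDK repo root and that polling rpc_get_methods is an acceptable stand-in for waitforlisten:

    ./build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # Wait until the app answers on the default RPC socket before issuing bdev RPCs.
    until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done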
00:18:09.138   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:18:09.138   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0
00:18:09.138   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:18:09.138   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc
00:18:09.138   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:09.138   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:09.138  BaseBdev1_malloc
00:18:09.138   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:09.138   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:18:09.138   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:09.138   11:39:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:09.138  [2024-12-16 11:39:35.004159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:18:09.138  [2024-12-16 11:39:35.004301] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:09.138  [2024-12-16 11:39:35.004358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:18:09.138  [2024-12-16 11:39:35.004376] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:09.138  [2024-12-16 11:39:35.006313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:09.138  [2024-12-16 11:39:35.006348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:18:09.138  BaseBdev1
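Each base device for the rebuild test is a malloc bdev with interleaved metadata wrapped in a passthru bdev, so the raid layer claims the passthru while the underlying malloc survives passthru delete/re-create cycles. In bdev_malloc_create the positional arguments are total size in MiB and block size, and -m 32 -i add 32 bytes of interleaved per-block metadata, which is where the 4128-byte blocklen reported by raid_bdev_configure_cont comes from. A sketch of one such pair, mirroring the rpc_cmd calls above:

    # 32 MiB malloc bdev, 4096-byte blocks, 32 bytes of interleaved metadata per block.
    rpc.py bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc
    # Passthru layer on top; this is the bdev the raid array will actually claim.
    rpc.py bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1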
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:09.138  BaseBdev2_malloc
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:09.138  [2024-12-16 11:39:35.046104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:18:09.138  [2024-12-16 11:39:35.046165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:09.138  [2024-12-16 11:39:35.046202] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:18:09.138  [2024-12-16 11:39:35.046211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:09.138  [2024-12-16 11:39:35.048194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:09.138  [2024-12-16 11:39:35.048232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:18:09.138  BaseBdev2
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:09.138  spare_malloc
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:09.138  spare_delay
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:09.138  [2024-12-16 11:39:35.086692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:18:09.138  [2024-12-16 11:39:35.086758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:09.138  [2024-12-16 11:39:35.086785] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:18:09.138  [2024-12-16 11:39:35.086795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:09.138  [2024-12-16 11:39:35.088841] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:09.138  [2024-12-16 11:39:35.088878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:18:09.138  spare
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:09.138  [2024-12-16 11:39:35.098718] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:18:09.138  [2024-12-16 11:39:35.100731] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:18:09.138  [2024-12-16 11:39:35.100895] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:18:09.138  [2024-12-16 11:39:35.100909] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:18:09.138  [2024-12-16 11:39:35.101014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:18:09.138  [2024-12-16 11:39:35.101072] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:18:09.138  [2024-12-16 11:39:35.101082] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:18:09.138  [2024-12-16 11:39:35.101147] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
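The array for this test is created with a superblock (-s), which reserves space for raid metadata on each base bdev (hence the data_offset of 256 in the dumps above) and is what the examine, re-assembly and rebuild steps key off. A sketch of the create call plus a quick state check, following the rpc_cmd invocation above:

    rpc.py bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
    # Expect "online" once both base bdevs are configured.
    rpc.py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'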
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:09.138   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:09.138    11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:09.139    11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:09.139    11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:09.139    11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:09.139    11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:09.139   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:09.139    "name": "raid_bdev1",
00:18:09.139    "uuid": "f2560191-3e0d-4373-96cc-529020f3f775",
00:18:09.139    "strip_size_kb": 0,
00:18:09.139    "state": "online",
00:18:09.139    "raid_level": "raid1",
00:18:09.139    "superblock": true,
00:18:09.139    "num_base_bdevs": 2,
00:18:09.139    "num_base_bdevs_discovered": 2,
00:18:09.139    "num_base_bdevs_operational": 2,
00:18:09.139    "base_bdevs_list": [
00:18:09.139      {
00:18:09.139        "name": "BaseBdev1",
00:18:09.139        "uuid": "6df73efc-4b1b-5418-8f1a-3eeaa947c591",
00:18:09.139        "is_configured": true,
00:18:09.139        "data_offset": 256,
00:18:09.139        "data_size": 7936
00:18:09.139      },
00:18:09.139      {
00:18:09.139        "name": "BaseBdev2",
00:18:09.139        "uuid": "47bc1033-3ff6-5916-9835-744b82b49aec",
00:18:09.139        "is_configured": true,
00:18:09.139        "data_offset": 256,
00:18:09.139        "data_size": 7936
00:18:09.139      }
00:18:09.139    ]
00:18:09.139  }'
00:18:09.139   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:09.139   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:09.398    11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:18:09.398    11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:09.398    11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:09.658    11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:18:09.658  [2024-12-16 11:39:35.470382] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:09.658    11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:09.658   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936
00:18:09.658    11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:09.658    11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:18:09.658    11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:09.658    11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:09.658    11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:09.658   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256
00:18:09.658   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:18:09.658   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']'
00:18:09.658   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:18:09.658   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:09.658   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:09.658  [2024-12-16 11:39:35.569904] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:18:09.658   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:09.658   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:09.658   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:09.658   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:09.658   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:09.658   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:09.658   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:09.658   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:09.658   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:09.658   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:09.658   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:09.658    11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:09.658    11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:09.658    11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:09.658    11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:09.658    11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:09.658   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:09.658    "name": "raid_bdev1",
00:18:09.658    "uuid": "f2560191-3e0d-4373-96cc-529020f3f775",
00:18:09.658    "strip_size_kb": 0,
00:18:09.658    "state": "online",
00:18:09.658    "raid_level": "raid1",
00:18:09.658    "superblock": true,
00:18:09.658    "num_base_bdevs": 2,
00:18:09.658    "num_base_bdevs_discovered": 1,
00:18:09.658    "num_base_bdevs_operational": 1,
00:18:09.658    "base_bdevs_list": [
00:18:09.658      {
00:18:09.658        "name": null,
00:18:09.658        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:09.658        "is_configured": false,
00:18:09.658        "data_offset": 0,
00:18:09.658        "data_size": 7936
00:18:09.658      },
00:18:09.658      {
00:18:09.658        "name": "BaseBdev2",
00:18:09.658        "uuid": "47bc1033-3ff6-5916-9835-744b82b49aec",
00:18:09.658        "is_configured": true,
00:18:09.658        "data_offset": 256,
00:18:09.658        "data_size": 7936
00:18:09.658      }
00:18:09.658    ]
00:18:09.658  }'
00:18:09.658   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:09.658   11:39:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:10.227   11:39:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:18:10.227   11:39:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:10.227   11:39:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:10.227  [2024-12-16 11:39:36.021211] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:18:10.227  [2024-12-16 11:39:36.024284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:18:10.227  [2024-12-16 11:39:36.026213] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:18:10.227   11:39:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:10.227   11:39:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1
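bdev_raid_add_base_bdev above slots the spare back into the degraded array, which starts a rebuild (raid_bdev_process_thread_init), and the sleep gives it a moment to get underway before the process descriptor is inspected. A sketch of the add-and-inspect step using the same RPCs:

    rpc.py bdev_raid_add_base_bdev raid_bdev1 spare
    sleep 1   # let the rebuild start before querying its process object
    rpc.py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .process'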
00:18:11.165   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:11.165   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:11.165   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:11.165   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:11.165   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:11.165    11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:11.165    11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:11.165    11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:11.165    11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:11.165    11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:11.165   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:11.165    "name": "raid_bdev1",
00:18:11.165    "uuid": "f2560191-3e0d-4373-96cc-529020f3f775",
00:18:11.165    "strip_size_kb": 0,
00:18:11.165    "state": "online",
00:18:11.165    "raid_level": "raid1",
00:18:11.165    "superblock": true,
00:18:11.165    "num_base_bdevs": 2,
00:18:11.165    "num_base_bdevs_discovered": 2,
00:18:11.165    "num_base_bdevs_operational": 2,
00:18:11.165    "process": {
00:18:11.165      "type": "rebuild",
00:18:11.165      "target": "spare",
00:18:11.165      "progress": {
00:18:11.165        "blocks": 2560,
00:18:11.165        "percent": 32
00:18:11.165      }
00:18:11.165    },
00:18:11.165    "base_bdevs_list": [
00:18:11.165      {
00:18:11.165        "name": "spare",
00:18:11.165        "uuid": "ddafa007-8eeb-5993-94af-55dc93d6c147",
00:18:11.165        "is_configured": true,
00:18:11.165        "data_offset": 256,
00:18:11.165        "data_size": 7936
00:18:11.165      },
00:18:11.165      {
00:18:11.165        "name": "BaseBdev2",
00:18:11.165        "uuid": "47bc1033-3ff6-5916-9835-744b82b49aec",
00:18:11.165        "is_configured": true,
00:18:11.165        "data_offset": 256,
00:18:11.165        "data_size": 7936
00:18:11.165      }
00:18:11.165    ]
00:18:11.165  }'
00:18:11.165    11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:11.165   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:11.165    11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:11.165   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
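Beyond type and target, the process object carries a progress section (blocks and percent, visible in the JSON above), which can be polled to follow a rebuild as it runs. A small jq sketch for pulling those fields out of the same RPC output:

    rpc.py bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1") | .process.progress | "\(.blocks) blocks (\(.percent)%)"'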
00:18:11.165   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:18:11.165   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:11.165   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:11.165  [2024-12-16 11:39:37.164773] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:11.424  [2024-12-16 11:39:37.231394] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:18:11.424  [2024-12-16 11:39:37.231521] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:11.424  [2024-12-16 11:39:37.231596] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:11.424  [2024-12-16 11:39:37.231621] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:18:11.424   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:11.424   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:11.424   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:11.424   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:11.424   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:11.424   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:11.424   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:11.424   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:11.424   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:11.424   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:11.424   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:11.424    11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:11.424    11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:11.424    11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:11.424    11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:11.424    11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:11.424   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:11.424    "name": "raid_bdev1",
00:18:11.424    "uuid": "f2560191-3e0d-4373-96cc-529020f3f775",
00:18:11.424    "strip_size_kb": 0,
00:18:11.424    "state": "online",
00:18:11.424    "raid_level": "raid1",
00:18:11.424    "superblock": true,
00:18:11.424    "num_base_bdevs": 2,
00:18:11.424    "num_base_bdevs_discovered": 1,
00:18:11.424    "num_base_bdevs_operational": 1,
00:18:11.424    "base_bdevs_list": [
00:18:11.424      {
00:18:11.424        "name": null,
00:18:11.424        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:11.424        "is_configured": false,
00:18:11.424        "data_offset": 0,
00:18:11.424        "data_size": 7936
00:18:11.424      },
00:18:11.424      {
00:18:11.424        "name": "BaseBdev2",
00:18:11.424        "uuid": "47bc1033-3ff6-5916-9835-744b82b49aec",
00:18:11.424        "is_configured": true,
00:18:11.424        "data_offset": 256,
00:18:11.424        "data_size": 7936
00:18:11.424      }
00:18:11.424    ]
00:18:11.424  }'
00:18:11.424   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:11.424   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
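The @103-115 block above (verify_raid_bdev_state raid_bdev1 online raid1 0 1) checks the dumped JSON against the expected degraded state: still online, raid1, strip size 0, one operational base bdev. An illustrative approximation using the field names from the JSON above (the exact assertions live in test/bdev/bdev_raid.sh; rpc.py path assumed):
    info=$(./scripts/rpc.py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ "$(jq -r '.state' <<< "$info")" == online ]]
    [[ "$(jq -r '.raid_level' <<< "$info")" == raid1 ]]
    [[ "$(jq -r '.num_base_bdevs_operational' <<< "$info")" == 1 ]]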
00:18:11.684   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:18:11.684   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:11.684   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:18:11.684   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none
00:18:11.684   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:11.684    11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:11.684    11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:11.684    11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:11.684    11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:11.684    11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:11.684   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:11.684    "name": "raid_bdev1",
00:18:11.684    "uuid": "f2560191-3e0d-4373-96cc-529020f3f775",
00:18:11.684    "strip_size_kb": 0,
00:18:11.684    "state": "online",
00:18:11.684    "raid_level": "raid1",
00:18:11.684    "superblock": true,
00:18:11.684    "num_base_bdevs": 2,
00:18:11.684    "num_base_bdevs_discovered": 1,
00:18:11.684    "num_base_bdevs_operational": 1,
00:18:11.684    "base_bdevs_list": [
00:18:11.684      {
00:18:11.684        "name": null,
00:18:11.684        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:11.684        "is_configured": false,
00:18:11.684        "data_offset": 0,
00:18:11.684        "data_size": 7936
00:18:11.684      },
00:18:11.684      {
00:18:11.684        "name": "BaseBdev2",
00:18:11.684        "uuid": "47bc1033-3ff6-5916-9835-744b82b49aec",
00:18:11.684        "is_configured": true,
00:18:11.684        "data_offset": 256,
00:18:11.684        "data_size": 7936
00:18:11.684      }
00:18:11.684    ]
00:18:11.684  }'
00:18:11.684    11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:11.684   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:18:11.684    11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:11.684   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:18:11.684   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:18:11.684   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:11.684   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:11.944  [2024-12-16 11:39:37.750670] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:18:11.944  [2024-12-16 11:39:37.753684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:18:11.944  [2024-12-16 11:39:37.755602] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:18:11.944   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:11.944   11:39:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1
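Lines @662-663 re-attach the previously removed 'spare' and give the freshly started rebuild a moment before polling it. As a sketch, using the same RPC names as the trace (rpc.py path assumed):
    ./scripts/rpc.py bdev_raid_add_base_bdev raid_bdev1 spare   # kicks off a new rebuild onto 'spare'
    sleep 1                                                     # let the rebuild make some progress before verifying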
00:18:12.885   11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:12.885   11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:12.885   11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:12.885   11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:12.885   11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:12.885    11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:12.885    11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:12.885    11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:12.885    11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:12.885    11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:12.885   11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:12.885    "name": "raid_bdev1",
00:18:12.885    "uuid": "f2560191-3e0d-4373-96cc-529020f3f775",
00:18:12.885    "strip_size_kb": 0,
00:18:12.885    "state": "online",
00:18:12.885    "raid_level": "raid1",
00:18:12.885    "superblock": true,
00:18:12.885    "num_base_bdevs": 2,
00:18:12.885    "num_base_bdevs_discovered": 2,
00:18:12.885    "num_base_bdevs_operational": 2,
00:18:12.885    "process": {
00:18:12.885      "type": "rebuild",
00:18:12.885      "target": "spare",
00:18:12.885      "progress": {
00:18:12.885        "blocks": 2560,
00:18:12.885        "percent": 32
00:18:12.885      }
00:18:12.885    },
00:18:12.885    "base_bdevs_list": [
00:18:12.885      {
00:18:12.885        "name": "spare",
00:18:12.885        "uuid": "ddafa007-8eeb-5993-94af-55dc93d6c147",
00:18:12.885        "is_configured": true,
00:18:12.885        "data_offset": 256,
00:18:12.885        "data_size": 7936
00:18:12.885      },
00:18:12.885      {
00:18:12.885        "name": "BaseBdev2",
00:18:12.885        "uuid": "47bc1033-3ff6-5916-9835-744b82b49aec",
00:18:12.885        "is_configured": true,
00:18:12.885        "data_offset": 256,
00:18:12.885        "data_size": 7936
00:18:12.885      }
00:18:12.885    ]
00:18:12.886  }'
00:18:12.886    11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:12.886   11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:12.886    11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:12.886   11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:18:12.886   11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:18:12.886   11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:18:12.886  /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
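The "unary operator expected" message above is a genuine bug in the test script rather than in the bdev layer: at bdev_raid.sh line 666 an empty or unset variable is expanded unquoted inside a single-bracket test, so '[' is left with only '= false' as its operands. A hypothetical hardened form (the variable name 'flag' is illustrative, not the script's actual identifier):
    # Quote or default the expansion so '[' always sees three operands ...
    if [ "${flag:-}" = false ]; then echo "flag is false"; fi
    # ... or use bash's [[ ]], which does not word-split an empty expansion.
    if [[ $flag == false ]]; then echo "flag is false"; fi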
00:18:12.886   11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:18:12.886   11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:18:12.886   11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:18:12.886   11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=631
00:18:12.886   11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:18:12.886   11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:12.886   11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:12.886   11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:12.886   11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:12.886   11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:12.886    11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:12.886    11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:12.886    11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:12.886    11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:12.886    11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:12.886   11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:12.886    "name": "raid_bdev1",
00:18:12.886    "uuid": "f2560191-3e0d-4373-96cc-529020f3f775",
00:18:12.886    "strip_size_kb": 0,
00:18:12.886    "state": "online",
00:18:12.886    "raid_level": "raid1",
00:18:12.886    "superblock": true,
00:18:12.886    "num_base_bdevs": 2,
00:18:12.886    "num_base_bdevs_discovered": 2,
00:18:12.886    "num_base_bdevs_operational": 2,
00:18:12.886    "process": {
00:18:12.886      "type": "rebuild",
00:18:12.886      "target": "spare",
00:18:12.886      "progress": {
00:18:12.886        "blocks": 2816,
00:18:12.886        "percent": 35
00:18:12.886      }
00:18:12.886    },
00:18:12.886    "base_bdevs_list": [
00:18:12.886      {
00:18:12.886        "name": "spare",
00:18:12.886        "uuid": "ddafa007-8eeb-5993-94af-55dc93d6c147",
00:18:12.886        "is_configured": true,
00:18:12.886        "data_offset": 256,
00:18:12.886        "data_size": 7936
00:18:12.886      },
00:18:12.886      {
00:18:12.886        "name": "BaseBdev2",
00:18:12.886        "uuid": "47bc1033-3ff6-5916-9835-744b82b49aec",
00:18:12.886        "is_configured": true,
00:18:12.886        "data_offset": 256,
00:18:12.886        "data_size": 7936
00:18:12.886      }
00:18:12.886    ]
00:18:12.886  }'
00:18:12.886    11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:12.886   11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:12.886    11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:13.145   11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:18:13.145   11:39:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1
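Lines @706-711 form the progress-poll loop: while the script's SECONDS counter is below the timeout (631 in this trace) it re-verifies that the rebuild is still targeting 'spare' and sleeps a second between samples, breaking via @709 once the process disappears. A minimal sketch of that loop with the helper inlined (rpc.py path assumed; the break condition paraphrases @708-709):
    timeout=631                                    # value captured at @706 in this trace
    while (( SECONDS < timeout )); do
        info=$(./scripts/rpc.py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        [[ "$(jq -r '.process.type // "none"' <<< "$info")" == rebuild ]] || break   # rebuild finished
        sleep 1
    done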
00:18:14.082   11:39:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:18:14.083   11:39:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:14.083   11:39:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:14.083   11:39:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:14.083   11:39:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:14.083   11:39:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:14.083    11:39:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:14.083    11:39:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:14.083    11:39:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:14.083    11:39:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:14.083    11:39:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:14.083   11:39:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:14.083    "name": "raid_bdev1",
00:18:14.083    "uuid": "f2560191-3e0d-4373-96cc-529020f3f775",
00:18:14.083    "strip_size_kb": 0,
00:18:14.083    "state": "online",
00:18:14.083    "raid_level": "raid1",
00:18:14.083    "superblock": true,
00:18:14.083    "num_base_bdevs": 2,
00:18:14.083    "num_base_bdevs_discovered": 2,
00:18:14.083    "num_base_bdevs_operational": 2,
00:18:14.083    "process": {
00:18:14.083      "type": "rebuild",
00:18:14.083      "target": "spare",
00:18:14.083      "progress": {
00:18:14.083        "blocks": 5632,
00:18:14.083        "percent": 70
00:18:14.083      }
00:18:14.083    },
00:18:14.083    "base_bdevs_list": [
00:18:14.083      {
00:18:14.083        "name": "spare",
00:18:14.083        "uuid": "ddafa007-8eeb-5993-94af-55dc93d6c147",
00:18:14.083        "is_configured": true,
00:18:14.083        "data_offset": 256,
00:18:14.083        "data_size": 7936
00:18:14.083      },
00:18:14.083      {
00:18:14.083        "name": "BaseBdev2",
00:18:14.083        "uuid": "47bc1033-3ff6-5916-9835-744b82b49aec",
00:18:14.083        "is_configured": true,
00:18:14.083        "data_offset": 256,
00:18:14.083        "data_size": 7936
00:18:14.083      }
00:18:14.083    ]
00:18:14.083  }'
00:18:14.083    11:39:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:14.083   11:39:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:14.083    11:39:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:14.083   11:39:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:18:14.083   11:39:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1
00:18:15.019  [2024-12-16 11:39:40.867490] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:18:15.019  [2024-12-16 11:39:40.867591] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:18:15.019  [2024-12-16 11:39:40.867701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:15.278   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:18:15.278   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:15.278   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:15.278   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:15.278   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:15.278   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:15.278    11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:15.278    11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:15.278    11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:15.278    11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:15.278    11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:15.278   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:15.278    "name": "raid_bdev1",
00:18:15.278    "uuid": "f2560191-3e0d-4373-96cc-529020f3f775",
00:18:15.278    "strip_size_kb": 0,
00:18:15.278    "state": "online",
00:18:15.278    "raid_level": "raid1",
00:18:15.278    "superblock": true,
00:18:15.278    "num_base_bdevs": 2,
00:18:15.278    "num_base_bdevs_discovered": 2,
00:18:15.278    "num_base_bdevs_operational": 2,
00:18:15.278    "base_bdevs_list": [
00:18:15.278      {
00:18:15.278        "name": "spare",
00:18:15.278        "uuid": "ddafa007-8eeb-5993-94af-55dc93d6c147",
00:18:15.278        "is_configured": true,
00:18:15.278        "data_offset": 256,
00:18:15.278        "data_size": 7936
00:18:15.278      },
00:18:15.278      {
00:18:15.278        "name": "BaseBdev2",
00:18:15.278        "uuid": "47bc1033-3ff6-5916-9835-744b82b49aec",
00:18:15.278        "is_configured": true,
00:18:15.278        "data_offset": 256,
00:18:15.278        "data_size": 7936
00:18:15.278      }
00:18:15.278    ]
00:18:15.278  }'
00:18:15.278    11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:15.278   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:18:15.278    11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:15.278   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:18:15.278   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break
00:18:15.278   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:18:15.278   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:15.278   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:18:15.278   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none
00:18:15.278   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:15.278    11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:15.278    11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:15.278    11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:15.278    11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:15.278    11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:15.278   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:15.278    "name": "raid_bdev1",
00:18:15.278    "uuid": "f2560191-3e0d-4373-96cc-529020f3f775",
00:18:15.278    "strip_size_kb": 0,
00:18:15.278    "state": "online",
00:18:15.278    "raid_level": "raid1",
00:18:15.278    "superblock": true,
00:18:15.278    "num_base_bdevs": 2,
00:18:15.278    "num_base_bdevs_discovered": 2,
00:18:15.278    "num_base_bdevs_operational": 2,
00:18:15.278    "base_bdevs_list": [
00:18:15.278      {
00:18:15.278        "name": "spare",
00:18:15.278        "uuid": "ddafa007-8eeb-5993-94af-55dc93d6c147",
00:18:15.278        "is_configured": true,
00:18:15.278        "data_offset": 256,
00:18:15.278        "data_size": 7936
00:18:15.278      },
00:18:15.278      {
00:18:15.278        "name": "BaseBdev2",
00:18:15.278        "uuid": "47bc1033-3ff6-5916-9835-744b82b49aec",
00:18:15.278        "is_configured": true,
00:18:15.278        "data_offset": 256,
00:18:15.278        "data_size": 7936
00:18:15.278      }
00:18:15.278    ]
00:18:15.278  }'
00:18:15.278    11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:15.278   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:18:15.278    11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:15.537   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:18:15.537   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:18:15.537   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:15.537   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:15.537   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:15.537   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:15.537   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:15.537   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:15.537   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:15.537   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:15.537   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:15.537    11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:15.537    11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:15.537    11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:15.537    11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:15.537    11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:15.537   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:15.537    "name": "raid_bdev1",
00:18:15.537    "uuid": "f2560191-3e0d-4373-96cc-529020f3f775",
00:18:15.537    "strip_size_kb": 0,
00:18:15.537    "state": "online",
00:18:15.537    "raid_level": "raid1",
00:18:15.537    "superblock": true,
00:18:15.537    "num_base_bdevs": 2,
00:18:15.537    "num_base_bdevs_discovered": 2,
00:18:15.537    "num_base_bdevs_operational": 2,
00:18:15.537    "base_bdevs_list": [
00:18:15.537      {
00:18:15.537        "name": "spare",
00:18:15.537        "uuid": "ddafa007-8eeb-5993-94af-55dc93d6c147",
00:18:15.537        "is_configured": true,
00:18:15.537        "data_offset": 256,
00:18:15.537        "data_size": 7936
00:18:15.537      },
00:18:15.537      {
00:18:15.537        "name": "BaseBdev2",
00:18:15.537        "uuid": "47bc1033-3ff6-5916-9835-744b82b49aec",
00:18:15.537        "is_configured": true,
00:18:15.537        "data_offset": 256,
00:18:15.537        "data_size": 7936
00:18:15.537      }
00:18:15.537    ]
00:18:15.537  }'
00:18:15.537   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:15.537   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:15.795   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:18:15.795   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:15.795   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:15.795  [2024-12-16 11:39:41.805875] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:15.795  [2024-12-16 11:39:41.805905] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:18:15.795  [2024-12-16 11:39:41.805992] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:15.795  [2024-12-16 11:39:41.806067] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:15.795  [2024-12-16 11:39:41.806080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:18:15.795   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:15.795    11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:15.795    11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:15.795    11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:15.795    11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length
00:18:15.795    11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:15.795   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
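Lines @719-720 tear the array down and confirm nothing is left: bdev_raid_delete drives raid_bdev1 from online to offline (the DEBUG lines above) and the follow-up get_bdevs must come back empty. Sketch of that check (rpc.py path assumed):
    ./scripts/rpc.py bdev_raid_delete raid_bdev1
    [[ "$(./scripts/rpc.py bdev_raid_get_bdevs all | jq length)" == 0 ]]   # no raid bdevs remain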
00:18:15.795   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']'
00:18:15.795   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:18:15.795   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:18:15.795   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:15.795   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:15.795   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:15.795   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:18:15.795   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:15.795   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:16.055  [2024-12-16 11:39:41.865740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:18:16.055  [2024-12-16 11:39:41.865802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:16.055  [2024-12-16 11:39:41.865821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:18:16.055  [2024-12-16 11:39:41.865832] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:16.055  [2024-12-16 11:39:41.867943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:16.055  [2024-12-16 11:39:41.867986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:18:16.055  [2024-12-16 11:39:41.868045] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:18:16.055  [2024-12-16 11:39:41.868086] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:18:16.055  [2024-12-16 11:39:41.868185] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:18:16.055  spare
00:18:16.055   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:16.055   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:18:16.055   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:16.055   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:16.055  [2024-12-16 11:39:41.968096] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600
00:18:16.055  [2024-12-16 11:39:41.968146] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:18:16.055  [2024-12-16 11:39:41.968275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:18:16.055  [2024-12-16 11:39:41.968387] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600
00:18:16.055  [2024-12-16 11:39:41.968399] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600
00:18:16.055  [2024-12-16 11:39:41.968501] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:16.055   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
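Lines @745-747 rebuild the 'spare' passthru on top of spare_delay and wait for examine; because spare still carries a raid superblock, examine reassembles raid_bdev1 on its own (the "raid superblock found on bdev spare" and "raid bdev is created with name raid_bdev1" messages above). Sketch of that sequence (bdev names from this log; rpc.py path assumed):
    ./scripts/rpc.py bdev_passthru_delete spare
    ./scripts/rpc.py bdev_passthru_create -b spare_delay -p spare
    ./scripts/rpc.py bdev_wait_for_examine    # examine reads the superblocks and brings raid_bdev1 back online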
00:18:16.055   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:18:16.055   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:16.055   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:16.055   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:16.055   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:16.055   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:16.055   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:16.055   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:16.055   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:16.055   11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:16.055    11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:16.055    11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:16.055    11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:16.055    11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:16.055    11:39:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:16.055   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:16.055    "name": "raid_bdev1",
00:18:16.055    "uuid": "f2560191-3e0d-4373-96cc-529020f3f775",
00:18:16.055    "strip_size_kb": 0,
00:18:16.055    "state": "online",
00:18:16.055    "raid_level": "raid1",
00:18:16.055    "superblock": true,
00:18:16.055    "num_base_bdevs": 2,
00:18:16.055    "num_base_bdevs_discovered": 2,
00:18:16.055    "num_base_bdevs_operational": 2,
00:18:16.055    "base_bdevs_list": [
00:18:16.055      {
00:18:16.055        "name": "spare",
00:18:16.055        "uuid": "ddafa007-8eeb-5993-94af-55dc93d6c147",
00:18:16.055        "is_configured": true,
00:18:16.055        "data_offset": 256,
00:18:16.055        "data_size": 7936
00:18:16.055      },
00:18:16.055      {
00:18:16.055        "name": "BaseBdev2",
00:18:16.055        "uuid": "47bc1033-3ff6-5916-9835-744b82b49aec",
00:18:16.055        "is_configured": true,
00:18:16.055        "data_offset": 256,
00:18:16.055        "data_size": 7936
00:18:16.055      }
00:18:16.055    ]
00:18:16.055  }'
00:18:16.055   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:16.055   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:16.315   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:18:16.315   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:16.315   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:18:16.315   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none
00:18:16.315   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:16.315    11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:16.315    11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:16.315    11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:16.315    11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:16.315    11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:16.576   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:16.576    "name": "raid_bdev1",
00:18:16.576    "uuid": "f2560191-3e0d-4373-96cc-529020f3f775",
00:18:16.576    "strip_size_kb": 0,
00:18:16.576    "state": "online",
00:18:16.576    "raid_level": "raid1",
00:18:16.576    "superblock": true,
00:18:16.576    "num_base_bdevs": 2,
00:18:16.576    "num_base_bdevs_discovered": 2,
00:18:16.576    "num_base_bdevs_operational": 2,
00:18:16.576    "base_bdevs_list": [
00:18:16.576      {
00:18:16.576        "name": "spare",
00:18:16.576        "uuid": "ddafa007-8eeb-5993-94af-55dc93d6c147",
00:18:16.576        "is_configured": true,
00:18:16.576        "data_offset": 256,
00:18:16.576        "data_size": 7936
00:18:16.576      },
00:18:16.576      {
00:18:16.576        "name": "BaseBdev2",
00:18:16.576        "uuid": "47bc1033-3ff6-5916-9835-744b82b49aec",
00:18:16.576        "is_configured": true,
00:18:16.576        "data_offset": 256,
00:18:16.576        "data_size": 7936
00:18:16.576      }
00:18:16.576    ]
00:18:16.576  }'
00:18:16.576    11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:16.576   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:18:16.576    11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:16.576   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:18:16.576    11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:18:16.576    11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:16.576    11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:16.576    11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:16.576    11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:16.576   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
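Line @751 is a positional sanity check before the next removal: the first entry of base_bdevs_list must still be the re-added 'spare'. An equivalent one-liner (rpc.py path assumed):
    [[ "$(./scripts/rpc.py bdev_raid_get_bdevs all | jq -r '.[].base_bdevs_list[0].name')" == spare ]]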
00:18:16.576   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:18:16.576   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:16.576   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:16.576  [2024-12-16 11:39:42.556673] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:16.576   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:16.576   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:16.576   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:16.576   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:16.576   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:16.576   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:16.576   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:16.576   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:16.576   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:16.576   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:16.576   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:16.576    11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:16.576    11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:16.576    11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:16.576    11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:16.576    11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:16.576   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:16.576    "name": "raid_bdev1",
00:18:16.576    "uuid": "f2560191-3e0d-4373-96cc-529020f3f775",
00:18:16.576    "strip_size_kb": 0,
00:18:16.576    "state": "online",
00:18:16.576    "raid_level": "raid1",
00:18:16.576    "superblock": true,
00:18:16.576    "num_base_bdevs": 2,
00:18:16.577    "num_base_bdevs_discovered": 1,
00:18:16.577    "num_base_bdevs_operational": 1,
00:18:16.577    "base_bdevs_list": [
00:18:16.577      {
00:18:16.577        "name": null,
00:18:16.577        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:16.577        "is_configured": false,
00:18:16.577        "data_offset": 0,
00:18:16.577        "data_size": 7936
00:18:16.577      },
00:18:16.577      {
00:18:16.577        "name": "BaseBdev2",
00:18:16.577        "uuid": "47bc1033-3ff6-5916-9835-744b82b49aec",
00:18:16.577        "is_configured": true,
00:18:16.577        "data_offset": 256,
00:18:16.577        "data_size": 7936
00:18:16.577      }
00:18:16.577    ]
00:18:16.577  }'
00:18:16.577   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:16.577   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:17.145   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:18:17.146   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:17.146   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:17.146  [2024-12-16 11:39:42.991948] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:18:17.146  [2024-12-16 11:39:42.992221] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:18:17.146  [2024-12-16 11:39:42.992291] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:18:17.146  [2024-12-16 11:39:42.992359] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:18:17.146  [2024-12-16 11:39:42.995279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:18:17.146  [2024-12-16 11:39:42.997400] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:18:17.146   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:17.146   11:39:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1
00:18:18.084   11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:18.084   11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:18.084   11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:18.084   11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:18.084   11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:18.084    11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:18.084    11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:18.084    11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.084    11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:18.084    11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.084   11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:18.084    "name": "raid_bdev1",
00:18:18.084    "uuid": "f2560191-3e0d-4373-96cc-529020f3f775",
00:18:18.084    "strip_size_kb": 0,
00:18:18.084    "state": "online",
00:18:18.084    "raid_level": "raid1",
00:18:18.084    "superblock": true,
00:18:18.084    "num_base_bdevs": 2,
00:18:18.084    "num_base_bdevs_discovered": 2,
00:18:18.084    "num_base_bdevs_operational": 2,
00:18:18.084    "process": {
00:18:18.084      "type": "rebuild",
00:18:18.084      "target": "spare",
00:18:18.084      "progress": {
00:18:18.084        "blocks": 2560,
00:18:18.084        "percent": 32
00:18:18.084      }
00:18:18.084    },
00:18:18.084    "base_bdevs_list": [
00:18:18.084      {
00:18:18.084        "name": "spare",
00:18:18.084        "uuid": "ddafa007-8eeb-5993-94af-55dc93d6c147",
00:18:18.084        "is_configured": true,
00:18:18.084        "data_offset": 256,
00:18:18.084        "data_size": 7936
00:18:18.084      },
00:18:18.084      {
00:18:18.084        "name": "BaseBdev2",
00:18:18.085        "uuid": "47bc1033-3ff6-5916-9835-744b82b49aec",
00:18:18.085        "is_configured": true,
00:18:18.085        "data_offset": 256,
00:18:18.085        "data_size": 7936
00:18:18.085      }
00:18:18.085    ]
00:18:18.085  }'
00:18:18.085    11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:18.085   11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:18.085    11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:18.344   11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:18:18.344   11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:18:18.344   11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.344   11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:18.344  [2024-12-16 11:39:44.168073] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:18.344  [2024-12-16 11:39:44.201958] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:18:18.344  [2024-12-16 11:39:44.202017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:18.344  [2024-12-16 11:39:44.202034] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:18.344  [2024-12-16 11:39:44.202041] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:18:18.344   11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.344   11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:18.344   11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:18.344   11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:18.344   11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:18.344   11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:18.344   11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:18.344   11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:18.344   11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:18.344   11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:18.345   11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:18.345    11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:18.345    11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:18.345    11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.345    11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:18.345    11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.345   11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:18.345    "name": "raid_bdev1",
00:18:18.345    "uuid": "f2560191-3e0d-4373-96cc-529020f3f775",
00:18:18.345    "strip_size_kb": 0,
00:18:18.345    "state": "online",
00:18:18.345    "raid_level": "raid1",
00:18:18.345    "superblock": true,
00:18:18.345    "num_base_bdevs": 2,
00:18:18.345    "num_base_bdevs_discovered": 1,
00:18:18.345    "num_base_bdevs_operational": 1,
00:18:18.345    "base_bdevs_list": [
00:18:18.345      {
00:18:18.345        "name": null,
00:18:18.345        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:18.345        "is_configured": false,
00:18:18.345        "data_offset": 0,
00:18:18.345        "data_size": 7936
00:18:18.345      },
00:18:18.345      {
00:18:18.345        "name": "BaseBdev2",
00:18:18.345        "uuid": "47bc1033-3ff6-5916-9835-744b82b49aec",
00:18:18.345        "is_configured": true,
00:18:18.345        "data_offset": 256,
00:18:18.345        "data_size": 7936
00:18:18.345      }
00:18:18.345    ]
00:18:18.345  }'
00:18:18.345   11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:18.345   11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:18.604   11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:18:18.605   11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.605   11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:18.605  [2024-12-16 11:39:44.617138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:18:18.605  [2024-12-16 11:39:44.617259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:18.605  [2024-12-16 11:39:44.617294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:18:18.605  [2024-12-16 11:39:44.617303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:18.605  [2024-12-16 11:39:44.617514] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:18.605  [2024-12-16 11:39:44.617529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:18:18.605  [2024-12-16 11:39:44.617601] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:18:18.605  [2024-12-16 11:39:44.617615] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:18:18.605  [2024-12-16 11:39:44.617626] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:18:18.605  [2024-12-16 11:39:44.617646] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:18:18.605  [2024-12-16 11:39:44.620541] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:18:18.605  [2024-12-16 11:39:44.622438] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:18:18.605  spare
00:18:18.605   11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.605   11:39:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1
00:18:19.985   11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:19.985   11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:19.985   11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:19.985   11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:19.985   11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:19.985    11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:19.985    11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:19.985    11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:19.985    11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:19.985    11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:19.985   11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:19.985    "name": "raid_bdev1",
00:18:19.985    "uuid": "f2560191-3e0d-4373-96cc-529020f3f775",
00:18:19.985    "strip_size_kb": 0,
00:18:19.985    "state": "online",
00:18:19.985    "raid_level": "raid1",
00:18:19.985    "superblock": true,
00:18:19.985    "num_base_bdevs": 2,
00:18:19.985    "num_base_bdevs_discovered": 2,
00:18:19.985    "num_base_bdevs_operational": 2,
00:18:19.985    "process": {
00:18:19.985      "type": "rebuild",
00:18:19.985      "target": "spare",
00:18:19.985      "progress": {
00:18:19.985        "blocks": 2560,
00:18:19.985        "percent": 32
00:18:19.985      }
00:18:19.985    },
00:18:19.985    "base_bdevs_list": [
00:18:19.985      {
00:18:19.985        "name": "spare",
00:18:19.985        "uuid": "ddafa007-8eeb-5993-94af-55dc93d6c147",
00:18:19.985        "is_configured": true,
00:18:19.985        "data_offset": 256,
00:18:19.985        "data_size": 7936
00:18:19.985      },
00:18:19.985      {
00:18:19.985        "name": "BaseBdev2",
00:18:19.985        "uuid": "47bc1033-3ff6-5916-9835-744b82b49aec",
00:18:19.985        "is_configured": true,
00:18:19.985        "data_offset": 256,
00:18:19.985        "data_size": 7936
00:18:19.985      }
00:18:19.985    ]
00:18:19.985  }'
00:18:19.985    11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:19.985   11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:19.985    11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:19.985   11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
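
The process check traced above reduces to one bdev_raid_get_bdevs call plus two jq lookups on .process.type and .process.target. A minimal stand-alone sketch of the same idiom, assuming rpc.py is invoked from the spdk repo root (the helper name below is illustrative, not the autotest implementation):

  verify_process() {                          # illustrative, not the autotest helper
      local raid=$1 want_type=$2 want_target=$3 info
      info=$(./scripts/rpc.py bdev_raid_get_bdevs all | jq -r --arg n "$raid" '.[] | select(.name == $n)')
      [[ $(jq -r '.process.type // "none"' <<< "$info") == "$want_type" ]] || return 1
      [[ $(jq -r '.process.target // "none"' <<< "$info") == "$want_target" ]] || return 1
  }
  verify_process raid_bdev1 rebuild spare     # mirrors the check traced above
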
00:18:19.985   11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:18:19.985   11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:19.985   11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:19.985  [2024-12-16 11:39:45.761142] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:19.985  [2024-12-16 11:39:45.826923] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:18:19.985  [2024-12-16 11:39:45.827042] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:19.985  [2024-12-16 11:39:45.827077] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:19.985  [2024-12-16 11:39:45.827100] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:18:19.985   11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:19.985   11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:19.985   11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:19.985   11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:19.985   11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:19.985   11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:19.985   11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:19.985   11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:19.985   11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:19.985   11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:19.985   11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:19.985    11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:19.985    11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:19.985    11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:19.985    11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:19.985    11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:19.985   11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:19.985    "name": "raid_bdev1",
00:18:19.985    "uuid": "f2560191-3e0d-4373-96cc-529020f3f775",
00:18:19.985    "strip_size_kb": 0,
00:18:19.985    "state": "online",
00:18:19.985    "raid_level": "raid1",
00:18:19.985    "superblock": true,
00:18:19.985    "num_base_bdevs": 2,
00:18:19.985    "num_base_bdevs_discovered": 1,
00:18:19.985    "num_base_bdevs_operational": 1,
00:18:19.985    "base_bdevs_list": [
00:18:19.985      {
00:18:19.985        "name": null,
00:18:19.985        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:19.985        "is_configured": false,
00:18:19.985        "data_offset": 0,
00:18:19.985        "data_size": 7936
00:18:19.985      },
00:18:19.985      {
00:18:19.985        "name": "BaseBdev2",
00:18:19.985        "uuid": "47bc1033-3ff6-5916-9835-744b82b49aec",
00:18:19.985        "is_configured": true,
00:18:19.985        "data_offset": 256,
00:18:19.985        "data_size": 7936
00:18:19.985      }
00:18:19.985    ]
00:18:19.985  }'
00:18:19.985   11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:19.985   11:39:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
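
The assertions of verify_raid_bdev_state run after the xtrace_disable at bdev_raid.sh line 115, so they are not traced; judging from the locals set above, a plausible reconstruction of the hidden checks is:

  # plausible reconstruction; the exact helper internals are an assumption
  [[ $(jq -r '.state' <<< "$raid_bdev_info") == "$expected_state" ]]
  [[ $(jq -r '.raid_level' <<< "$raid_bdev_info") == "$raid_level" ]]
  (( $(jq -r '.num_base_bdevs_discovered' <<< "$raid_bdev_info") == num_base_bdevs_operational ))
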
00:18:20.245   11:39:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:18:20.245   11:39:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:20.245   11:39:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:18:20.245   11:39:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none
00:18:20.245   11:39:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:20.245    11:39:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:20.245    11:39:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:20.245    11:39:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:20.245    11:39:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:20.245    11:39:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:20.507   11:39:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:20.507    "name": "raid_bdev1",
00:18:20.507    "uuid": "f2560191-3e0d-4373-96cc-529020f3f775",
00:18:20.507    "strip_size_kb": 0,
00:18:20.507    "state": "online",
00:18:20.507    "raid_level": "raid1",
00:18:20.507    "superblock": true,
00:18:20.507    "num_base_bdevs": 2,
00:18:20.507    "num_base_bdevs_discovered": 1,
00:18:20.507    "num_base_bdevs_operational": 1,
00:18:20.507    "base_bdevs_list": [
00:18:20.507      {
00:18:20.507        "name": null,
00:18:20.507        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:20.507        "is_configured": false,
00:18:20.507        "data_offset": 0,
00:18:20.507        "data_size": 7936
00:18:20.507      },
00:18:20.507      {
00:18:20.507        "name": "BaseBdev2",
00:18:20.507        "uuid": "47bc1033-3ff6-5916-9835-744b82b49aec",
00:18:20.507        "is_configured": true,
00:18:20.507        "data_offset": 256,
00:18:20.507        "data_size": 7936
00:18:20.507      }
00:18:20.507    ]
00:18:20.507  }'
00:18:20.507    11:39:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:20.507   11:39:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:18:20.507    11:39:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:20.507   11:39:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:18:20.507   11:39:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:18:20.507   11:39:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:20.507   11:39:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:20.507   11:39:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:20.507   11:39:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:18:20.507   11:39:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:20.507   11:39:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:20.507  [2024-12-16 11:39:46.397840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:18:20.507  [2024-12-16 11:39:46.397945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:20.507  [2024-12-16 11:39:46.397971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:18:20.507  [2024-12-16 11:39:46.397983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:20.507  [2024-12-16 11:39:46.398143] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:20.507  [2024-12-16 11:39:46.398157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:18:20.508  [2024-12-16 11:39:46.398206] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:18:20.508  [2024-12-16 11:39:46.398234] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:18:20.508  [2024-12-16 11:39:46.398243] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:18:20.508  [2024-12-16 11:39:46.398257] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:18:20.508  BaseBdev1
00:18:20.508   11:39:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:20.508   11:39:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1
00:18:21.447   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:21.447   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:21.447   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:21.447   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:21.447   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:21.447   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:21.447   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:21.447   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:21.447   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:21.447   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:21.447    11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:21.447    11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:21.447    11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:21.447    11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:21.447    11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:21.447   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:21.447    "name": "raid_bdev1",
00:18:21.447    "uuid": "f2560191-3e0d-4373-96cc-529020f3f775",
00:18:21.447    "strip_size_kb": 0,
00:18:21.447    "state": "online",
00:18:21.447    "raid_level": "raid1",
00:18:21.447    "superblock": true,
00:18:21.447    "num_base_bdevs": 2,
00:18:21.447    "num_base_bdevs_discovered": 1,
00:18:21.447    "num_base_bdevs_operational": 1,
00:18:21.447    "base_bdevs_list": [
00:18:21.447      {
00:18:21.447        "name": null,
00:18:21.447        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:21.447        "is_configured": false,
00:18:21.447        "data_offset": 0,
00:18:21.447        "data_size": 7936
00:18:21.447      },
00:18:21.447      {
00:18:21.447        "name": "BaseBdev2",
00:18:21.447        "uuid": "47bc1033-3ff6-5916-9835-744b82b49aec",
00:18:21.447        "is_configured": true,
00:18:21.447        "data_offset": 256,
00:18:21.447        "data_size": 7936
00:18:21.447      }
00:18:21.447    ]
00:18:21.447  }'
00:18:21.447   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:21.447   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:22.017   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none
00:18:22.017   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:22.017   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:18:22.017   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none
00:18:22.017   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:22.017    11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:22.017    11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:22.017    11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:22.017    11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:22.017    11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:22.017   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:22.017    "name": "raid_bdev1",
00:18:22.017    "uuid": "f2560191-3e0d-4373-96cc-529020f3f775",
00:18:22.017    "strip_size_kb": 0,
00:18:22.017    "state": "online",
00:18:22.017    "raid_level": "raid1",
00:18:22.017    "superblock": true,
00:18:22.017    "num_base_bdevs": 2,
00:18:22.017    "num_base_bdevs_discovered": 1,
00:18:22.017    "num_base_bdevs_operational": 1,
00:18:22.017    "base_bdevs_list": [
00:18:22.017      {
00:18:22.017        "name": null,
00:18:22.017        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:22.017        "is_configured": false,
00:18:22.017        "data_offset": 0,
00:18:22.017        "data_size": 7936
00:18:22.017      },
00:18:22.017      {
00:18:22.017        "name": "BaseBdev2",
00:18:22.017        "uuid": "47bc1033-3ff6-5916-9835-744b82b49aec",
00:18:22.017        "is_configured": true,
00:18:22.017        "data_offset": 256,
00:18:22.017        "data_size": 7936
00:18:22.017      }
00:18:22.017    ]
00:18:22.017  }'
00:18:22.017    11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:22.017   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:18:22.017    11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:22.017   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:18:22.017   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:18:22.017   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0
00:18:22.017   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:18:22.017   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:18:22.017   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:22.017    11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:18:22.017   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:22.017   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:18:22.017   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:22.017   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:22.018  [2024-12-16 11:39:47.931313] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:18:22.018  [2024-12-16 11:39:47.931503] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:18:22.018  [2024-12-16 11:39:47.931516] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:18:22.018  request:
00:18:22.018  {
00:18:22.018  "base_bdev": "BaseBdev1",
00:18:22.018  "raid_bdev": "raid_bdev1",
00:18:22.018  "method": "bdev_raid_add_base_bdev",
00:18:22.018  "req_id": 1
00:18:22.018  }
00:18:22.018  Got JSON-RPC error response
00:18:22.018  response:
00:18:22.018  {
00:18:22.018  "code": -22,
00:18:22.018  "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:18:22.018  }
00:18:22.018   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:18:22.018   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1
00:18:22.018   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:22.018   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:22.018   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 ))
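
The NOT wrapper used for this step succeeds only when the wrapped command fails, which is how the test asserts that re-adding BaseBdev1 with a stale superblock must be rejected. A simplified sketch of that idiom (the real autotest_common.sh helper also validates the command and expected exit codes):

  NOT() {               # simplified; success only if the wrapped command failed
      local es=0
      "$@" || es=$?
      (( es != 0 ))
  }
  NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
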
00:18:22.018   11:39:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1
00:18:22.957   11:39:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:22.957   11:39:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:22.957   11:39:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:22.957   11:39:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:22.957   11:39:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:22.957   11:39:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:22.957   11:39:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:22.957   11:39:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:22.957   11:39:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:22.957   11:39:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:22.957    11:39:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:22.957    11:39:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:22.957    11:39:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:22.957    11:39:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:22.957    11:39:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:22.957   11:39:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:22.957    "name": "raid_bdev1",
00:18:22.957    "uuid": "f2560191-3e0d-4373-96cc-529020f3f775",
00:18:22.957    "strip_size_kb": 0,
00:18:22.957    "state": "online",
00:18:22.957    "raid_level": "raid1",
00:18:22.957    "superblock": true,
00:18:22.957    "num_base_bdevs": 2,
00:18:22.957    "num_base_bdevs_discovered": 1,
00:18:22.957    "num_base_bdevs_operational": 1,
00:18:22.957    "base_bdevs_list": [
00:18:22.957      {
00:18:22.957        "name": null,
00:18:22.957        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:22.957        "is_configured": false,
00:18:22.957        "data_offset": 0,
00:18:22.957        "data_size": 7936
00:18:22.957      },
00:18:22.957      {
00:18:22.957        "name": "BaseBdev2",
00:18:22.957        "uuid": "47bc1033-3ff6-5916-9835-744b82b49aec",
00:18:22.957        "is_configured": true,
00:18:22.957        "data_offset": 256,
00:18:22.957        "data_size": 7936
00:18:22.957      }
00:18:22.957    ]
00:18:22.957  }'
00:18:22.957   11:39:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:22.957   11:39:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:23.527   11:39:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none
00:18:23.527   11:39:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:23.527   11:39:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:18:23.527   11:39:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none
00:18:23.527   11:39:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:23.527    11:39:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:23.527    11:39:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:23.527    11:39:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:23.527    11:39:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:23.527    11:39:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:23.527   11:39:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:23.527    "name": "raid_bdev1",
00:18:23.527    "uuid": "f2560191-3e0d-4373-96cc-529020f3f775",
00:18:23.527    "strip_size_kb": 0,
00:18:23.527    "state": "online",
00:18:23.527    "raid_level": "raid1",
00:18:23.527    "superblock": true,
00:18:23.527    "num_base_bdevs": 2,
00:18:23.527    "num_base_bdevs_discovered": 1,
00:18:23.527    "num_base_bdevs_operational": 1,
00:18:23.527    "base_bdevs_list": [
00:18:23.527      {
00:18:23.527        "name": null,
00:18:23.527        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:23.527        "is_configured": false,
00:18:23.527        "data_offset": 0,
00:18:23.527        "data_size": 7936
00:18:23.527      },
00:18:23.527      {
00:18:23.527        "name": "BaseBdev2",
00:18:23.527        "uuid": "47bc1033-3ff6-5916-9835-744b82b49aec",
00:18:23.527        "is_configured": true,
00:18:23.527        "data_offset": 256,
00:18:23.527        "data_size": 7936
00:18:23.527      }
00:18:23.527    ]
00:18:23.527  }'
00:18:23.527    11:39:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:23.527   11:39:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:18:23.527    11:39:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:23.527   11:39:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:18:23.527   11:39:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 99706
00:18:23.527   11:39:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99706 ']'
00:18:23.527   11:39:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99706
00:18:23.527    11:39:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname
00:18:23.527   11:39:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:23.527    11:39:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99706
00:18:23.527   11:39:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:18:23.527  killing process with pid 99706
00:18:23.527  Received shutdown signal, test time was about 60.000000 seconds
00:18:23.527  
00:18:23.527                                                                                                  Latency(us)
00:18:23.527  
[2024-12-16T11:39:49.594Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:23.527  
[2024-12-16T11:39:49.594Z]  ===================================================================================================================
00:18:23.527  
[2024-12-16T11:39:49.594Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:18:23.527   11:39:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:18:23.527   11:39:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99706'
00:18:23.527   11:39:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 99706
00:18:23.527  [2024-12-16 11:39:49.518154] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:18:23.527  [2024-12-16 11:39:49.518298] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:23.527   11:39:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 99706
00:18:23.527  [2024-12-16 11:39:49.518352] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:23.527  [2024-12-16 11:39:49.518361] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline
00:18:23.527  [2024-12-16 11:39:49.551528] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:18:23.787   11:39:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0
00:18:23.787  
00:18:23.787  real	0m15.725s
00:18:23.787  user	0m20.810s
00:18:23.787  sys	0m1.542s
00:18:23.787  ************************************
00:18:23.787  END TEST raid_rebuild_test_sb_md_interleaved
00:18:23.787   11:39:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable
00:18:23.787   11:39:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:23.787  ************************************
00:18:23.787   11:39:49 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT
00:18:23.787   11:39:49 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup
00:18:23.787   11:39:49 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 99706 ']'
00:18:23.787   11:39:49 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 99706
00:18:24.047   11:39:49 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest
00:18:24.047  ************************************
00:18:24.047  END TEST bdev_raid
00:18:24.047  ************************************
00:18:24.047  
00:18:24.047  real	10m12.789s
00:18:24.047  user	14m36.827s
00:18:24.047  sys	1m51.043s
00:18:24.047   11:39:49 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable
00:18:24.047   11:39:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:18:24.047   11:39:49  -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh
00:18:24.047   11:39:49  -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:18:24.047   11:39:49  -- common/autotest_common.sh@1107 -- # xtrace_disable
00:18:24.047   11:39:49  -- common/autotest_common.sh@10 -- # set +x
00:18:24.047  ************************************
00:18:24.047  START TEST spdkcli_raid
00:18:24.047  ************************************
00:18:24.047   11:39:49 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh
00:18:24.047  * Looking for test storage...
00:18:24.047  * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli
00:18:24.047    11:39:50 spdkcli_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:18:24.047     11:39:50 spdkcli_raid -- common/autotest_common.sh@1681 -- # lcov --version
00:18:24.047     11:39:50 spdkcli_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:18:24.307    11:39:50 spdkcli_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:18:24.307    11:39:50 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:18:24.307    11:39:50 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:18:24.307    11:39:50 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:18:24.307    11:39:50 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-:
00:18:24.307    11:39:50 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1
00:18:24.307    11:39:50 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-:
00:18:24.307    11:39:50 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2
00:18:24.307    11:39:50 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<'
00:18:24.307    11:39:50 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2
00:18:24.307    11:39:50 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1
00:18:24.307    11:39:50 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:18:24.307    11:39:50 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in
00:18:24.307    11:39:50 spdkcli_raid -- scripts/common.sh@345 -- # : 1
00:18:24.307    11:39:50 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 ))
00:18:24.307    11:39:50 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:24.307     11:39:50 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1
00:18:24.307     11:39:50 spdkcli_raid -- scripts/common.sh@353 -- # local d=1
00:18:24.307     11:39:50 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:24.307     11:39:50 spdkcli_raid -- scripts/common.sh@355 -- # echo 1
00:18:24.307    11:39:50 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1
00:18:24.307     11:39:50 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2
00:18:24.307     11:39:50 spdkcli_raid -- scripts/common.sh@353 -- # local d=2
00:18:24.307     11:39:50 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:24.307     11:39:50 spdkcli_raid -- scripts/common.sh@355 -- # echo 2
00:18:24.307    11:39:50 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2
00:18:24.307    11:39:50 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:18:24.307    11:39:50 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:18:24.307    11:39:50 spdkcli_raid -- scripts/common.sh@368 -- # return 0
00:18:24.307    11:39:50 spdkcli_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:24.307    11:39:50 spdkcli_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:18:24.307  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:24.307  		--rc genhtml_branch_coverage=1
00:18:24.307  		--rc genhtml_function_coverage=1
00:18:24.307  		--rc genhtml_legend=1
00:18:24.307  		--rc geninfo_all_blocks=1
00:18:24.307  		--rc geninfo_unexecuted_blocks=1
00:18:24.307  		
00:18:24.307  		'
00:18:24.307    11:39:50 spdkcli_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:18:24.307  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:24.307  		--rc genhtml_branch_coverage=1
00:18:24.307  		--rc genhtml_function_coverage=1
00:18:24.307  		--rc genhtml_legend=1
00:18:24.307  		--rc geninfo_all_blocks=1
00:18:24.307  		--rc geninfo_unexecuted_blocks=1
00:18:24.307  		
00:18:24.307  		'
00:18:24.307    11:39:50 spdkcli_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:18:24.307  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:24.307  		--rc genhtml_branch_coverage=1
00:18:24.307  		--rc genhtml_function_coverage=1
00:18:24.307  		--rc genhtml_legend=1
00:18:24.307  		--rc geninfo_all_blocks=1
00:18:24.307  		--rc geninfo_unexecuted_blocks=1
00:18:24.307  		
00:18:24.307  		'
00:18:24.307    11:39:50 spdkcli_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 
00:18:24.307  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:24.307  		--rc genhtml_branch_coverage=1
00:18:24.307  		--rc genhtml_function_coverage=1
00:18:24.307  		--rc genhtml_legend=1
00:18:24.307  		--rc geninfo_all_blocks=1
00:18:24.307  		--rc geninfo_unexecuted_blocks=1
00:18:24.307  		
00:18:24.307  		'
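
The lt/cmp_versions trace above splits both version strings on '.', '-' and ':' and compares them field by field as integers. A condensed reconstruction of that logic (not the scripts/common.sh source; fields are assumed numeric):

  version_lt() {        # reconstruction of the traced cmp_versions behaviour
      local IFS=.-: i a b
      read -ra a <<< "$1"; read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1          # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo '1.15 < 2'   # first field already decides: 1 < 2
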
00:18:24.307   11:39:50 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh
00:18:24.307    11:39:50 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py
00:18:24.307    11:39:50 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py
00:18:24.307   11:39:50 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh
00:18:24.307    11:39:50 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br
00:18:24.307    11:39:50 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int
00:18:24.307    11:39:50 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br
00:18:24.307    11:39:50 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns
00:18:24.307    11:39:50 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE")
00:18:24.307    11:39:50 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int
00:18:24.307    11:39:50 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2
00:18:24.307    11:39:50 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br
00:18:24.308    11:39:50 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2
00:18:24.308    11:39:50 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1
00:18:24.308    11:39:50 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3
00:18:24.308    11:39:50 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2
00:18:24.308    11:39:50 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260
00:18:24.308    11:39:50 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32
00:18:24.308    11:39:50 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2
00:18:24.308    11:39:50 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY
00:18:24.308    11:39:50 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1
00:18:24.308    11:39:50 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}")
00:18:24.308    11:39:50 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF
00:18:24.308   11:39:50 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test
00:18:24.308   11:39:50 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs
00:18:24.308     11:39:50 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh
00:18:24.308    11:39:50 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli
00:18:24.308   11:39:50 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli
00:18:24.308   11:39:50 spdkcli_raid -- spdkcli/raid.sh@15 -- # . /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh
00:18:24.308    11:39:50 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py
00:18:24.308    11:39:50 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py
00:18:24.308   11:39:50 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT
00:18:24.308   11:39:50 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt
00:18:24.308   11:39:50 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable
00:18:24.308   11:39:50 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:18:24.308   11:39:50 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt
00:18:24.308   11:39:50 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=100368
00:18:24.308   11:39:50 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:18:24.308   11:39:50 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 100368
00:18:24.308   11:39:50 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 100368 ']'
00:18:24.308   11:39:50 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:24.308  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:24.308   11:39:50 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100
00:18:24.308   11:39:50 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:24.308   11:39:50 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable
00:18:24.308   11:39:50 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:18:24.308  [2024-12-16 11:39:50.277961] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:18:24.308  [2024-12-16 11:39:50.278182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100368 ]
00:18:24.567  [2024-12-16 11:39:50.440693] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:18:24.567  [2024-12-16 11:39:50.486254] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:18:24.567  [2024-12-16 11:39:50.486358] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:18:25.135   11:39:51 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:18:25.135   11:39:51 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0
00:18:25.135   11:39:51 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt
00:18:25.135   11:39:51 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable
00:18:25.135   11:39:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:18:25.135   11:39:51 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc
00:18:25.135   11:39:51 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable
00:18:25.135   11:39:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:18:25.135   11:39:51 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True
00:18:25.136  '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True
00:18:25.136  '
00:18:27.042  Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True]
00:18:27.043  Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True]
00:18:27.043   11:39:52 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc
00:18:27.043   11:39:52 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable
00:18:27.043   11:39:52 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:18:27.043   11:39:52 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid
00:18:27.043   11:39:52 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable
00:18:27.043   11:39:52 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:18:27.043   11:39:52 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True
00:18:27.043  '
00:18:27.980  Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True]
00:18:27.980   11:39:53 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid
00:18:27.980   11:39:53 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable
00:18:27.980   11:39:53 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:18:27.980   11:39:54 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match
00:18:27.980   11:39:54 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable
00:18:27.980   11:39:54 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:18:27.980   11:39:54 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match
00:18:27.980   11:39:54 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs
00:18:28.548   11:39:54 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match
00:18:28.548   11:39:54 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test
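
check_match dumps the spdkcli bdev tree and compares it against a stored template; only the redirect of the ll output into the .test file is not visible in the trace. The shape of the step, with paths relative to the spdk repo root and the redirect target assumed:

  scripts/spdkcli.py ll /bdevs > test/spdkcli/match_files/spdkcli_raid.test   # redirect target assumed
  test/app/match/match test/spdkcli/match_files/spdkcli_raid.test.match      # diff against the template
  rm -f test/spdkcli/match_files/spdkcli_raid.test
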
00:18:28.548   11:39:54 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match
00:18:28.548   11:39:54 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable
00:18:28.548   11:39:54 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:18:28.808   11:39:54 spdkcli_raid -- spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid
00:18:28.808   11:39:54 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable
00:18:28.808   11:39:54 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:18:28.808   11:39:54 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True
00:18:28.808  '
00:18:29.746  Executing command: ['/bdevs/raid_volume delete testraid', '', True]
00:18:29.746   11:39:55 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid
00:18:29.746   11:39:55 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable
00:18:29.746   11:39:55 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:18:29.746   11:39:55 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc
00:18:29.746   11:39:55 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable
00:18:29.746   11:39:55 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:18:29.746   11:39:55 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True
00:18:29.746  '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True
00:18:29.746  '
00:18:31.125  Executing command: ['/bdevs/malloc delete Malloc1', '', True]
00:18:31.125  Executing command: ['/bdevs/malloc delete Malloc2', '', True]
00:18:31.385   11:39:57 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc
00:18:31.385   11:39:57 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable
00:18:31.385   11:39:57 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:18:31.385   11:39:57 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 100368
00:18:31.385   11:39:57 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 100368 ']'
00:18:31.385   11:39:57 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 100368
00:18:31.385    11:39:57 spdkcli_raid -- common/autotest_common.sh@955 -- # uname
00:18:31.385   11:39:57 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:31.385    11:39:57 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100368
00:18:31.385   11:39:57 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:18:31.385   11:39:57 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:18:31.385   11:39:57 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100368'
00:18:31.385  killing process with pid 100368
00:18:31.385   11:39:57 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 100368
00:18:31.385   11:39:57 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 100368
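
killprocess, as traced here, is a guarded shutdown: confirm the pid is set and still alive with kill -0, check the process name so a sudo wrapper is never signalled directly, then terminate and reap. A simplified sketch of the pattern (the real helper handles sudo-wrapped processes separately):

  killprocess() {                                   # simplified sketch of the traced pattern
      local pid=$1
      [[ -n $pid ]] || return 1
      kill -0 "$pid" 2>/dev/null || return 0        # nothing to do if it already exited
      [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
      kill "$pid" && wait "$pid" || true            # wait works because spdk_tgt is our child
  }
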
00:18:31.954   11:39:57 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup
00:18:31.954   11:39:57 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 100368 ']'
00:18:31.954   11:39:57 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 100368
00:18:31.954   11:39:57 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 100368 ']'
00:18:31.954   11:39:57 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 100368
00:18:31.954  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (100368) - No such process
00:18:31.954   11:39:57 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 100368 is not found'
00:18:31.954  Process with pid 100368 is not found
00:18:31.954   11:39:57 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']'
00:18:31.954   11:39:57 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']'
00:18:31.954   11:39:57 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']'
00:18:31.954   11:39:57 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio
00:18:31.954  
00:18:31.954  real	0m7.796s
00:18:31.954  user	0m16.470s
00:18:31.954  sys	0m1.126s
00:18:31.954   11:39:57 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable
00:18:31.954   11:39:57 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x
00:18:31.954  ************************************
00:18:31.954  END TEST spdkcli_raid
00:18:31.954  ************************************
00:18:31.954   11:39:57  -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f
00:18:31.954   11:39:57  -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:18:31.954   11:39:57  -- common/autotest_common.sh@1107 -- # xtrace_disable
00:18:31.954   11:39:57  -- common/autotest_common.sh@10 -- # set +x
00:18:31.954  ************************************
00:18:31.954  START TEST blockdev_raid5f
00:18:31.954  ************************************
00:18:31.954   11:39:57 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f
00:18:31.954  * Looking for test storage...
00:18:31.955  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:18:31.955    11:39:57 blockdev_raid5f -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:18:31.955     11:39:57 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lcov --version
00:18:31.955     11:39:57 blockdev_raid5f -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:18:31.955    11:39:57 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:18:31.955    11:39:57 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:18:31.955    11:39:57 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l
00:18:31.955    11:39:57 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l
00:18:31.955    11:39:57 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-:
00:18:31.955    11:39:57 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1
00:18:31.955    11:39:57 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-:
00:18:31.955    11:39:57 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra ver2
00:18:31.955    11:39:57 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<'
00:18:31.955    11:39:57 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2
00:18:31.955    11:39:57 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1
00:18:31.955    11:39:57 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:18:31.955    11:39:57 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in
00:18:31.955    11:39:57 blockdev_raid5f -- scripts/common.sh@345 -- # : 1
00:18:31.955    11:39:57 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 ))
00:18:31.955    11:39:57 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:31.955     11:39:57 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1
00:18:31.955     11:39:57 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1
00:18:31.955     11:39:58 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:31.955     11:39:58 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1
00:18:31.955    11:39:58 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1
00:18:31.955     11:39:58 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2
00:18:31.955     11:39:58 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2
00:18:31.955     11:39:58 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:31.955     11:39:58 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2
00:18:31.955    11:39:58 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2
00:18:31.955    11:39:58 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:18:31.955    11:39:58 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:18:31.955    11:39:58 blockdev_raid5f -- scripts/common.sh@368 -- # return 0
00:18:31.955    11:39:58 blockdev_raid5f -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:31.955    11:39:58 blockdev_raid5f -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:18:31.955  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:31.955  		--rc genhtml_branch_coverage=1
00:18:31.955  		--rc genhtml_function_coverage=1
00:18:31.955  		--rc genhtml_legend=1
00:18:31.955  		--rc geninfo_all_blocks=1
00:18:31.955  		--rc geninfo_unexecuted_blocks=1
00:18:31.955  		
00:18:31.955  		'
00:18:31.955    11:39:58 blockdev_raid5f -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:18:31.955  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:31.955  		--rc genhtml_branch_coverage=1
00:18:31.955  		--rc genhtml_function_coverage=1
00:18:31.955  		--rc genhtml_legend=1
00:18:31.955  		--rc geninfo_all_blocks=1
00:18:31.955  		--rc geninfo_unexecuted_blocks=1
00:18:31.955  		
00:18:31.955  		'
00:18:31.955    11:39:58 blockdev_raid5f -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:18:31.955  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:31.955  		--rc genhtml_branch_coverage=1
00:18:31.955  		--rc genhtml_function_coverage=1
00:18:31.955  		--rc genhtml_legend=1
00:18:31.955  		--rc geninfo_all_blocks=1
00:18:31.955  		--rc geninfo_unexecuted_blocks=1
00:18:31.955  		
00:18:31.955  		'
00:18:31.955    11:39:58 blockdev_raid5f -- common/autotest_common.sh@1695 -- # LCOV='lcov 
00:18:31.955  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:31.955  		--rc genhtml_branch_coverage=1
00:18:31.955  		--rc genhtml_function_coverage=1
00:18:31.955  		--rc genhtml_legend=1
00:18:31.955  		--rc geninfo_all_blocks=1
00:18:31.955  		--rc geninfo_unexecuted_blocks=1
00:18:31.955  		
00:18:31.955  		'
00:18:31.955   11:39:58 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:18:31.955    11:39:58 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e
00:18:31.955   11:39:58 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:18:31.955   11:39:58 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:18:31.955   11:39:58 blockdev_raid5f -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:18:31.955   11:39:58 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:18:31.955   11:39:58 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:18:31.955   11:39:58 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:18:31.955   11:39:58 blockdev_raid5f -- bdev/blockdev.sh@20 -- # :
00:18:32.215   11:39:58 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0
00:18:32.215   11:39:58 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1
00:18:32.215   11:39:58 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5
00:18:32.215    11:39:58 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s
00:18:32.215   11:39:58 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']'
00:18:32.215   11:39:58 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0
00:18:32.215   11:39:58 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f
00:18:32.215   11:39:58 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device=
00:18:32.215   11:39:58 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek=
00:18:32.215   11:39:58 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx=
00:18:32.215   11:39:58 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc=
00:18:32.215   11:39:58 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']'
00:18:32.215   11:39:58 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]]
00:18:32.215   11:39:58 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]]
00:18:32.215   11:39:58 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt
00:18:32.215   11:39:58 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=100627
00:18:32.215   11:39:58 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:18:32.215   11:39:58 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:18:32.215   11:39:58 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 100627
00:18:32.215   11:39:58 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 100627 ']'
00:18:32.215   11:39:58 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:32.215   11:39:58 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100
00:18:32.215   11:39:58 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:32.215  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:32.215   11:39:58 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable
00:18:32.215   11:39:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:18:32.215  [2024-12-16 11:39:58.131582] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:18:32.216  [2024-12-16 11:39:58.131804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100627 ]
00:18:32.475  [2024-12-16 11:39:58.290596] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:32.475  [2024-12-16 11:39:58.334768] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:18:33.044   11:39:58 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:18:33.044   11:39:58 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0
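start_spdk_tgt launches the target binary in the background and waitforlisten polls its RPC socket until it answers; stripped of the retry bookkeeping seen above, the pattern is roughly the following sketch (a simplified equivalent, not the helper's literal code):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    # poll the default UNIX socket until the app responds to a trivial RPC
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done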
00:18:33.044   11:39:58 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in
00:18:33.044   11:39:58 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf
00:18:33.044   11:39:58 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd
00:18:33.044   11:39:58 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:33.044   11:39:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:18:33.044  Malloc0
00:18:33.044  Malloc1
00:18:33.044  Malloc2
00:18:33.044   11:39:58 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:33.044   11:39:58 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine
00:18:33.044   11:39:58 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:33.044   11:39:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:18:33.044   11:39:59 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:33.044   11:39:59 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat
00:18:33.044    11:39:59 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel
00:18:33.044    11:39:59 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:33.044    11:39:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:18:33.044    11:39:59 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:33.044    11:39:59 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev
00:18:33.044    11:39:59 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:33.044    11:39:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:18:33.044    11:39:59 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:33.044    11:39:59 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf
00:18:33.044    11:39:59 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:33.044    11:39:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:18:33.044    11:39:59 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:33.044   11:39:59 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs
00:18:33.044    11:39:59 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs
00:18:33.044    11:39:59 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:33.044    11:39:59 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)'
00:18:33.044    11:39:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:18:33.304    11:39:59 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:33.304   11:39:59 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name
00:18:33.304    11:39:59 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' '  "name": "raid5f",' '  "aliases": [' '    "6d02a670-2ea4-49c6-a073-804ec155a4dc"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "6d02a670-2ea4-49c6-a073-804ec155a4dc",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "raid": {' '      "uuid": "6d02a670-2ea4-49c6-a073-804ec155a4dc",' '      "strip_size_kb": 2,' '      "state": "online",' '      "raid_level": "raid5f",' '      "superblock": false,' '      "num_base_bdevs": 3,' '      "num_base_bdevs_discovered": 3,' '      "num_base_bdevs_operational": 3,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc0",' '          "uuid": "9438e0eb-a816-4225-acbb-e4652759a56f",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc1",' '          "uuid": "eec5e47c-4fcb-4300-baa2-dad3d6653c00",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc2",' '          "uuid": "552b49be-c53f-485f-a117-cba5103c0b25",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}'
00:18:33.304    11:39:59 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name
00:18:33.304   11:39:59 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}")
00:18:33.304   11:39:59 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f
00:18:33.304   11:39:59 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT
00:18:33.304   11:39:59 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 100627
00:18:33.304   11:39:59 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 100627 ']'
00:18:33.304   11:39:59 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 100627
00:18:33.304    11:39:59 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname
00:18:33.304   11:39:59 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:33.304    11:39:59 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100627
00:18:33.304  killing process with pid 100627
00:18:33.304   11:39:59 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:18:33.304   11:39:59 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:18:33.304   11:39:59 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100627'
00:18:33.304   11:39:59 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 100627
00:18:33.304   11:39:59 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 100627
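The configuration stage that just finished drove spdk_tgt over /var/tmp/spdk.sock; judging from the bdev_get_bdevs output above (three base bdevs with data_size 65536 blocks of 512 bytes, strip_size_kb 2, raid_level raid5f), the RPC sequence is roughly equivalent to the sketch below. The 32 MiB malloc size is inferred from that output, not copied from the test script:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create -b Malloc0 32 512   # 65536 x 512 B blocks (inferred)
    $rpc bdev_malloc_create -b Malloc1 32 512
    $rpc bdev_malloc_create -b Malloc2 32 512
    $rpc bdev_raid_create -n raid5f -z 2 -r raid5f -b "Malloc0 Malloc1 Malloc2"
    $rpc bdev_wait_for_examine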
00:18:33.572   11:39:59 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT
00:18:33.572   11:39:59 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f ''
00:18:33.572   11:39:59 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:18:33.572   11:39:59 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable
00:18:33.572   11:39:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:18:33.844  ************************************
00:18:33.844  START TEST bdev_hello_world
00:18:33.844  ************************************
00:18:33.844   11:39:59 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f ''
00:18:33.844  [2024-12-16 11:39:59.712376] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:18:33.844  [2024-12-16 11:39:59.712595] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100661 ]
00:18:33.844  [2024-12-16 11:39:59.869121] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:34.104  [2024-12-16 11:39:59.918840] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:18:34.104  [2024-12-16 11:40:00.106652] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:18:34.104  [2024-12-16 11:40:00.106783] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f
00:18:34.104  [2024-12-16 11:40:00.106829] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:18:34.104  [2024-12-16 11:40:00.107183] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:18:34.104  [2024-12-16 11:40:00.107390] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:18:34.104  [2024-12-16 11:40:00.107426] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:18:34.104  [2024-12-16 11:40:00.107494] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:18:34.104  
00:18:34.104  [2024-12-16 11:40:00.107516] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:18:34.363  
00:18:34.363  real	0m0.731s
00:18:34.363  user	0m0.406s
00:18:34.363  sys	0m0.208s
00:18:34.363  ************************************
00:18:34.363  END TEST bdev_hello_world
00:18:34.363  ************************************
00:18:34.363   11:40:00 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable
00:18:34.363   11:40:00 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
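bdev_hello_world is a single invocation of the packaged example against the generated JSON config; rerunning it by hand uses the same command shown in the trace:

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f
    # success is the round trip logged above: write "Hello World!", read it back

The only bdev-specific input is the -b argument, which names the raid5f volume selected as hello_world_bdev earlier.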
00:18:34.364   11:40:00 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds ''
00:18:34.364   11:40:00 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:18:34.364   11:40:00 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable
00:18:34.364   11:40:00 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:18:34.364  ************************************
00:18:34.364  START TEST bdev_bounds
00:18:34.364  ************************************
00:18:34.364   11:40:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds ''
00:18:34.623   11:40:00 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=100692
00:18:34.623   11:40:00 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:18:34.623   11:40:00 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:18:34.623  Process bdevio pid: 100692
00:18:34.623   11:40:00 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 100692'
00:18:34.623   11:40:00 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 100692
00:18:34.623   11:40:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 100692 ']'
00:18:34.623  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:34.623   11:40:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:34.623   11:40:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100
00:18:34.623   11:40:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:34.623   11:40:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable
00:18:34.623   11:40:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:18:34.623  [2024-12-16 11:40:00.509589] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:18:34.623  [2024-12-16 11:40:00.509734] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100692 ]
00:18:34.623  [2024-12-16 11:40:00.669078] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:18:34.882  [2024-12-16 11:40:00.716156] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:18:34.882  [2024-12-16 11:40:00.716255] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:18:34.882  [2024-12-16 11:40:00.716368] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:18:35.450   11:40:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:18:35.450   11:40:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0
00:18:35.450   11:40:01 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:18:35.450  I/O targets:
00:18:35.450    raid5f: 131072 blocks of 512 bytes (64 MiB)
00:18:35.450  
00:18:35.450  
00:18:35.450       CUnit - A unit testing framework for C - Version 2.1-3
00:18:35.450       http://cunit.sourceforge.net/
00:18:35.450  
00:18:35.450  
00:18:35.450  Suite: bdevio tests on: raid5f
00:18:35.450    Test: blockdev write read block ...passed
00:18:35.450    Test: blockdev write zeroes read block ...passed
00:18:35.450    Test: blockdev write zeroes read no split ...passed
00:18:35.450    Test: blockdev write zeroes read split ...passed
00:18:35.709    Test: blockdev write zeroes read split partial ...passed
00:18:35.709    Test: blockdev reset ...passed
00:18:35.709    Test: blockdev write read 8 blocks ...passed
00:18:35.709    Test: blockdev write read size > 128k ...passed
00:18:35.709    Test: blockdev write read invalid size ...passed
00:18:35.709    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:18:35.709    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:18:35.709    Test: blockdev write read max offset ...passed
00:18:35.709    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:18:35.709    Test: blockdev writev readv 8 blocks ...passed
00:18:35.709    Test: blockdev writev readv 30 x 1block ...passed
00:18:35.709    Test: blockdev writev readv block ...passed
00:18:35.709    Test: blockdev writev readv size > 128k ...passed
00:18:35.709    Test: blockdev writev readv size > 128k in two iovs ...passed
00:18:35.709    Test: blockdev comparev and writev ...passed
00:18:35.709    Test: blockdev nvme passthru rw ...passed
00:18:35.709    Test: blockdev nvme passthru vendor specific ...passed
00:18:35.709    Test: blockdev nvme admin passthru ...passed
00:18:35.709    Test: blockdev copy ...passed
00:18:35.709  
00:18:35.709  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:18:35.709                suites      1      1    n/a      0        0
00:18:35.709                 tests     23     23     23      0        0
00:18:35.709               asserts    130    130    130      0      n/a
00:18:35.709  
00:18:35.709  Elapsed time =    0.304 seconds
00:18:35.709  0
00:18:35.709   11:40:01 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 100692
00:18:35.709   11:40:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 100692 ']'
00:18:35.709   11:40:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 100692
00:18:35.709    11:40:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname
00:18:35.709   11:40:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:35.709    11:40:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100692
00:18:35.709   11:40:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:18:35.709   11:40:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:18:35.709   11:40:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100692'
00:18:35.709  killing process with pid 100692
00:18:35.709   11:40:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 100692
00:18:35.709   11:40:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 100692
00:18:35.967   11:40:01 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:18:35.967  
00:18:35.967  real	0m1.446s
00:18:35.967  user	0m3.455s
00:18:35.967  sys	0m0.322s
00:18:35.967   11:40:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable
00:18:35.967   11:40:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:18:35.967  ************************************
00:18:35.967  END TEST bdev_bounds
00:18:35.967  ************************************
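bdev_bounds starts the CUnit-based bdevio app on the same JSON config and then asks it, over RPC, to run its 23 boundary tests; reduced to the two commands visible in the trace:

    spdk=/home/vagrant/spdk_repo/spdk
    $spdk/test/bdev/bdevio/bdevio -w -s 0 --json $spdk/test/bdev/bdev.json &
    # once the app is listening on /var/tmp/spdk.sock:
    $spdk/test/bdev/bdevio/tests.py perform_tests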
00:18:35.967   11:40:01 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f ''
00:18:35.967   11:40:01 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:18:35.967   11:40:01 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable
00:18:35.967   11:40:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:18:35.967  ************************************
00:18:35.967  START TEST bdev_nbd
00:18:35.967  ************************************
00:18:35.967   11:40:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f ''
00:18:35.967    11:40:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:18:35.967   11:40:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:18:35.968   11:40:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:35.968   11:40:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:18:35.968   11:40:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f')
00:18:35.968   11:40:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:18:35.968   11:40:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1
00:18:35.968   11:40:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:18:35.968   11:40:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:18:35.968   11:40:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:18:35.968   11:40:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1
00:18:35.968   11:40:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0')
00:18:35.968   11:40:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:18:35.968   11:40:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f')
00:18:35.968   11:40:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:18:35.968   11:40:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=100735
00:18:35.968   11:40:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:18:35.968   11:40:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:18:35.968   11:40:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 100735 /var/tmp/spdk-nbd.sock
00:18:35.968  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:18:35.968   11:40:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 100735 ']'
00:18:35.968   11:40:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:18:35.968   11:40:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100
00:18:35.968   11:40:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:18:35.968   11:40:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable
00:18:35.968   11:40:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:18:36.226  [2024-12-16 11:40:02.035237] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:18:36.226  [2024-12-16 11:40:02.035372] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:36.226  [2024-12-16 11:40:02.194908] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:36.226  [2024-12-16 11:40:02.243145] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:18:37.163   11:40:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:18:37.163   11:40:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0
00:18:37.163   11:40:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f
00:18:37.163   11:40:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:37.163   11:40:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f')
00:18:37.163   11:40:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:18:37.163   11:40:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f
00:18:37.163   11:40:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:37.163   11:40:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f')
00:18:37.163   11:40:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:18:37.163   11:40:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:18:37.163   11:40:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:18:37.163   11:40:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:18:37.163   11:40:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 ))
00:18:37.163    11:40:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f
00:18:37.163   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:18:37.163    11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:18:37.163   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:18:37.163   11:40:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:18:37.163   11:40:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:18:37.163   11:40:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:18:37.163   11:40:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:18:37.163   11:40:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:18:37.163   11:40:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:18:37.163   11:40:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:18:37.163   11:40:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:18:37.163   11:40:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:18:37.163  1+0 records in
00:18:37.163  1+0 records out
00:18:37.163  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000524991 s, 7.8 MB/s
00:18:37.163    11:40:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:37.163   11:40:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:18:37.163   11:40:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:37.163   11:40:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:18:37.163   11:40:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:18:37.163   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:18:37.163   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 ))
00:18:37.163    11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:18:37.423   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:18:37.423    {
00:18:37.423      "nbd_device": "/dev/nbd0",
00:18:37.423      "bdev_name": "raid5f"
00:18:37.423    }
00:18:37.423  ]'
00:18:37.423   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:18:37.423    11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:18:37.423    11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:18:37.423    {
00:18:37.423      "nbd_device": "/dev/nbd0",
00:18:37.423      "bdev_name": "raid5f"
00:18:37.423    }
00:18:37.423  ]'
00:18:37.423   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:18:37.423   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:37.423   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:18:37.423   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:18:37.423   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:18:37.423   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:37.423   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:18:37.682    11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:18:37.682   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:18:37.682   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:18:37.682   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:37.682   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:37.682   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:18:37.682   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:18:37.682   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:18:37.682    11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:18:37.682    11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:37.682     11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:18:37.942    11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:18:37.942     11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:18:37.942     11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:18:37.942    11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:18:37.942     11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:18:37.942     11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:18:37.942     11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:18:37.942    11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:18:37.942    11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:18:37.942   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:18:37.942   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:18:37.942   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
00:18:37.942   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0
00:18:37.942   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:37.942   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f')
00:18:37.942   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:18:37.942   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0')
00:18:37.942   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:18:37.942   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0
00:18:37.942   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:37.942   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f')
00:18:37.942   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:18:37.942   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:18:37.942   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:18:37.942   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:18:37.942   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:18:37.942   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:18:37.942   11:40:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0
00:18:38.201  /dev/nbd0
00:18:38.201    11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:18:38.201   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:18:38.201   11:40:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:18:38.201   11:40:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:18:38.201   11:40:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:18:38.201   11:40:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:18:38.201   11:40:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:18:38.201   11:40:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:18:38.201   11:40:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:18:38.201   11:40:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:18:38.201   11:40:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:18:38.201  1+0 records in
00:18:38.201  1+0 records out
00:18:38.201  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000497433 s, 8.2 MB/s
00:18:38.201    11:40:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:38.201   11:40:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:18:38.201   11:40:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:38.201   11:40:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:18:38.201   11:40:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:18:38.201   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:18:38.201   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:18:38.201    11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:18:38.201    11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:38.201     11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:18:38.461    11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:18:38.461    {
00:18:38.461      "nbd_device": "/dev/nbd0",
00:18:38.461      "bdev_name": "raid5f"
00:18:38.461    }
00:18:38.461  ]'
00:18:38.461     11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:18:38.461     11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:18:38.461    {
00:18:38.461      "nbd_device": "/dev/nbd0",
00:18:38.461      "bdev_name": "raid5f"
00:18:38.461    }
00:18:38.461  ]'
00:18:38.461    11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:18:38.461     11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:18:38.461     11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:18:38.461    11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1
00:18:38.461    11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1
00:18:38.461   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1
00:18:38.461   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']'
00:18:38.461   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write
00:18:38.461   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0')
00:18:38.461   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:18:38.461   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:18:38.461   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:18:38.461   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:18:38.461   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:18:38.461  256+0 records in
00:18:38.461  256+0 records out
00:18:38.461  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144616 s, 72.5 MB/s
00:18:38.461   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:18:38.461   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:18:38.461  256+0 records in
00:18:38.461  256+0 records out
00:18:38.461  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0277849 s, 37.7 MB/s
00:18:38.461   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify
00:18:38.461   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0')
00:18:38.461   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:18:38.461   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:18:38.461   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:18:38.461   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:18:38.461   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:18:38.461   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:18:38.461   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:18:38.461   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
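The data-verify pass above is a plain round trip through the exported NBD device: generate 1 MiB of random data, write it to /dev/nbd0 with O_DIRECT, then compare the device contents against the source file. Without the helper wrappers it is simply:

    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest /dev/nbd0   # silent on success
    rm nbdrandtest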
00:18:38.461   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:18:38.461   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:38.461   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:18:38.461   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:18:38.461   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:18:38.461   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:38.461   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:18:38.720    11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:18:38.721   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:18:38.721   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:18:38.721   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:38.721   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:38.721   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:18:38.721   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:18:38.721   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:18:38.721    11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:18:38.721    11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:38.721     11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:18:38.980    11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:18:38.980     11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:18:38.980     11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:18:38.980    11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:18:38.980     11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:18:38.980     11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:18:38.980     11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:18:38.980    11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:18:38.980    11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:18:38.980   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:18:38.980   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:18:38.980   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
00:18:38.980   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:18:38.980   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:38.980   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:18:38.980   11:40:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:18:39.240  malloc_lvol_verify
00:18:39.240   11:40:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:18:39.499  22e37fa4-a1e5-4dcb-bd58-6547360e499a
00:18:39.499   11:40:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:18:39.499  8edf8672-226f-4ecf-903b-3d2254ab799b
00:18:39.499   11:40:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:18:39.759  /dev/nbd0
00:18:39.759   11:40:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:18:39.759   11:40:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:18:39.759   11:40:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:18:39.759   11:40:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
00:18:39.759   11:40:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:18:39.759  mke2fs 1.47.0 (5-Feb-2023)
00:18:39.759  Discarding device blocks:    0/4096         done                            
00:18:39.759  Creating filesystem with 4096 1k blocks and 1024 inodes
00:18:39.759  
00:18:39.759  Allocating group tables: 0/1   done                            
00:18:39.759  Writing inode tables: 0/1   done                            
00:18:39.759  Creating journal (1024 blocks): done
00:18:39.759  Writing superblocks and filesystem accounting information: 0/1   done
00:18:39.759  
00:18:39.759   11:40:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:18:39.759   11:40:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:39.759   11:40:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:18:39.759   11:40:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:18:39.759   11:40:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:18:39.759   11:40:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:39.759   11:40:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:18:40.018    11:40:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:18:40.018   11:40:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:18:40.018   11:40:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:18:40.018   11:40:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:40.018   11:40:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:40.018   11:40:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:18:40.018   11:40:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:18:40.018   11:40:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
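nbd_with_lvol_verify layers a 4 MiB logical volume on a 16 MiB malloc bdev, exports it as /dev/nbd0 and formats it with ext4; the commands all appear in the trace above, and collected in one place they are:

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
    $rpc bdev_lvol_create lvol 4 -l lvs
    $rpc nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd0

The two UUIDs printed above are the lvstore and lvol created by this sequence.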
00:18:40.018   11:40:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 100735
00:18:40.018   11:40:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 100735 ']'
00:18:40.018   11:40:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 100735
00:18:40.018    11:40:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname
00:18:40.018   11:40:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:40.018    11:40:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100735
00:18:40.018  killing process with pid 100735
00:18:40.018   11:40:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:18:40.018   11:40:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:18:40.018   11:40:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100735'
00:18:40.018   11:40:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 100735
00:18:40.018   11:40:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@974 -- # wait 100735
00:18:40.277   11:40:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:18:40.277  
00:18:40.277  real	0m4.324s
00:18:40.277  user	0m6.341s
00:18:40.277  sys	0m1.198s
00:18:40.277   11:40:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable
00:18:40.277   11:40:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:18:40.277  ************************************
00:18:40.277  END TEST bdev_nbd
00:18:40.277  ************************************
00:18:40.277   11:40:06 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]]
00:18:40.277   11:40:06 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']'
00:18:40.277   11:40:06 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']'
00:18:40.277   11:40:06 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite ''
00:18:40.277   11:40:06 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:18:40.277   11:40:06 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable
00:18:40.277   11:40:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:18:40.277  ************************************
00:18:40.277  START TEST bdev_fio
00:18:40.277  ************************************
00:18:40.277   11:40:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite ''
00:18:40.277  /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk
00:18:40.277   11:40:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context
00:18:40.277   11:40:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev
00:18:40.277   11:40:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT
00:18:40.277    11:40:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo ''
00:18:40.277    11:40:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=//
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context=
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO ''
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context=
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']'
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']'
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']'
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']'
00:18:40.540    11:40:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]]
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]'
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f
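fio_config_gen plus the two echo lines above assemble test/bdev/bdev.fio on the fly; the file itself is never printed in this log, so the sketch below only reconstructs its rough shape from what the trace shows (the [global] workload lines are assumptions; serialize_overlap and the job_raid5f entries are the ones echoed above):

    cat > /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio <<'EOF'
    [global]
    # verify workload emitted by fio_config_gen (exact lines not shown in this log)
    rw=randwrite
    verify=crc32c
    serialize_overlap=1

    [job_raid5f]
    filename=raid5f
    EOF
    # ioengine=spdk_bdev, bs=4k, iodepth=8 and runtime=10 are passed on the fio
    # command line, as the fio_params definition that follows shows.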
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 			--verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json'
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']'
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:18:40.540  ************************************
00:18:40.540  START TEST bdev_fio_rw_verify
00:18:40.540  ************************************
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib=
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:18:40.540    11:40:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan
00:18:40.540    11:40:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:18:40.540    11:40:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:18:40.540   11:40:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:18:40.804  job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:18:40.804  fio-3.35
00:18:40.804  Starting 1 thread
00:18:53.023  
00:18:53.023  job_raid5f: (groupid=0, jobs=1): err= 0: pid=100926: Mon Dec 16 11:40:17 2024
00:18:53.023    read: IOPS=11.5k, BW=45.0MiB/s (47.2MB/s)(450MiB/10001msec)
00:18:53.023      slat (usec): min=16, max=441, avg=20.40, stdev= 2.98
00:18:53.023      clat (usec): min=9, max=661, avg=138.35, stdev=50.24
00:18:53.023       lat (usec): min=28, max=681, avg=158.75, stdev=50.94
00:18:53.023      clat percentiles (usec):
00:18:53.023       | 50.000th=[  141], 99.000th=[  251], 99.900th=[  285], 99.990th=[  347],
00:18:53.023       | 99.999th=[  644]
00:18:53.023    write: IOPS=12.0k, BW=47.1MiB/s (49.3MB/s)(464MiB/9870msec); 0 zone resets
00:18:53.023      slat (usec): min=7, max=222, avg=18.00, stdev= 4.06
00:18:53.023      clat (usec): min=55, max=1739, avg=317.37, stdev=54.68
00:18:53.023       lat (usec): min=71, max=1946, avg=335.37, stdev=56.58
00:18:53.023      clat percentiles (usec):
00:18:53.023       | 50.000th=[  318], 99.000th=[  457], 99.900th=[  644], 99.990th=[ 1500],
00:18:53.023       | 99.999th=[ 1729]
00:18:53.023     bw (  KiB/s): min=41496, max=53160, per=99.67%, avg=48022.32, stdev=3070.13, samples=19
00:18:53.023     iops        : min=10374, max=13290, avg=12005.58, stdev=767.53, samples=19
00:18:53.023    lat (usec)   : 10=0.01%, 20=0.01%, 50=0.01%, 100=12.91%, 250=39.79%
00:18:53.023    lat (usec)   : 500=47.13%, 750=0.13%, 1000=0.02%
00:18:53.023    lat (msec)   : 2=0.02%
00:18:53.023    cpu          : usr=99.04%, sys=0.35%, ctx=39, majf=0, minf=12583
00:18:53.023    IO depths    : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:18:53.023       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:53.023       complete  : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:53.023       issued rwts: total=115211,118887,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:53.023       latency   : target=0, window=0, percentile=100.00%, depth=8
00:18:53.023  
00:18:53.023  Run status group 0 (all jobs):
00:18:53.023     READ: bw=45.0MiB/s (47.2MB/s), 45.0MiB/s-45.0MiB/s (47.2MB/s-47.2MB/s), io=450MiB (472MB), run=10001-10001msec
00:18:53.023    WRITE: bw=47.1MiB/s (49.3MB/s), 47.1MiB/s-47.1MiB/s (49.3MB/s-49.3MB/s), io=464MiB (487MB), run=9870-9870msec
00:18:53.023  -----------------------------------------------------
00:18:53.023  Suppressions used:
00:18:53.023    count      bytes template
00:18:53.023        1          7 /usr/src/fio/parse.c
00:18:53.023      121      11616 /usr/src/fio/iolog.c
00:18:53.023        1          8 libtcmalloc_minimal.so
00:18:53.023        1        904 libcrypto.so
00:18:53.023  -----------------------------------------------------
00:18:53.023  
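As a quick sanity check on the fio report above, the bandwidth figures follow directly from the issued I/O counts and runtimes. A throwaway awk calculation, with the numbers copied from the "issued rwts" and "run" lines:

    # 115211 reads and 118887 writes of 4096 B each, over 10.001 s and 9.870 s respectively
    awk 'BEGIN { printf "read  %.1f MB/s\n", 115211 * 4096 / 10.001 / 1e6 }'   # ~47.2 MB/s
    awk 'BEGIN { printf "write %.1f MB/s\n", 118887 * 4096 /  9.870 / 1e6 }'   # ~49.3 MB/s

Both values match the READ/WRITE summary lines (47.2 MB/s and 49.3 MB/s).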
00:18:53.023  ************************************
00:18:53.023  END TEST bdev_fio_rw_verify
00:18:53.023  ************************************
00:18:53.023  
00:18:53.023  real	0m11.208s
00:18:53.023  user	0m11.386s
00:18:53.023  sys	0m0.644s
00:18:53.023   11:40:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:18:53.023   11:40:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
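What the bdev_fio_rw_verify trace above does before launching fio: this job builds SPDK with --enable-asan, so the spdk_bdev ioengine plugin is ASan-instrumented while fio itself is not; the helper therefore resolves the ASan runtime the plugin links against and preloads it together with the plugin so fio can load the instrumented engine. A minimal stand-alone sketch of that step, with the paths taken from this log (the real helper also checks libclang_rt.asan and passes a few more fio options):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # Resolve the sanitizer runtime the plugin was linked against, e.g. /usr/lib64/libasan.so.8
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    if [[ -n "$asan_lib" ]]; then
        # Preload the ASan runtime ahead of the ioengine so fio can open the instrumented plugin.
        LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
            --iodepth=8 --bs=4k --runtime=10 \
            --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
            /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
    fi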
00:18:53.023   11:40:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
00:18:53.023   11:40:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:18:53.023   11:40:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:18:53.023   11:40:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:18:53.023   11:40:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim
00:18:53.023   11:40:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=
00:18:53.023   11:40:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context=
00:18:53.023   11:40:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio
00:18:53.023   11:40:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:18:53.023   11:40:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']'
00:18:53.023   11:40:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']'
00:18:53.023   11:40:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:18:53.023   11:40:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat
00:18:53.023   11:40:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']'
00:18:53.023   11:40:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']'
00:18:53.023   11:40:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite
00:18:53.023    11:40:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name'
00:18:53.023    11:40:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' '  "name": "raid5f",' '  "aliases": [' '    "6d02a670-2ea4-49c6-a073-804ec155a4dc"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "6d02a670-2ea4-49c6-a073-804ec155a4dc",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "raid": {' '      "uuid": "6d02a670-2ea4-49c6-a073-804ec155a4dc",' '      "strip_size_kb": 2,' '      "state": "online",' '      "raid_level": "raid5f",' '      "superblock": false,' '      "num_base_bdevs": 3,' '      "num_base_bdevs_discovered": 3,' '      "num_base_bdevs_operational": 3,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc0",' '          "uuid": "9438e0eb-a816-4225-acbb-e4652759a56f",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc1",' '          "uuid": "eec5e47c-4fcb-4300-baa2-dad3d6653c00",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc2",' '          "uuid": "552b49be-c53f-485f-a117-cba5103c0b25",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}'
00:18:53.023   11:40:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]]
00:18:53.024   11:40:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:18:53.024  /home/vagrant/spdk_repo/spdk
00:18:53.024  ************************************
00:18:53.024  END TEST bdev_fio
00:18:53.024  ************************************
00:18:53.024   11:40:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd
00:18:53.024   11:40:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT
00:18:53.024   11:40:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0
00:18:53.024  
00:18:53.024  real	0m11.476s
00:18:53.024  user	0m11.507s
00:18:53.024  sys	0m0.768s
00:18:53.024   11:40:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable
00:18:53.024   11:40:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x
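The bdev_fio trim pass above regenerates bdev.fio for a trim workload and then filters the bdev dump for devices that actually support unmap; only those would get a job section. In this run the raid5f volume reports "unmap": false, so the jq filter prints nothing and the trim job file is removed again. A condensed sketch of that selection, where a hypothetical dump file stands in for the printf of the bdev JSON seen in the trace:

    # One JSON object per bdev in the input, as printed by the trace above.
    jq -r 'select(.supported_io_types.unmap == true) | .name' /tmp/bdev_dump.json
    # Empty output => no trim-capable bdevs => blockdev.sh deletes bdev.fio instead of running fio.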
00:18:53.024   11:40:17 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:18:53.024   11:40:17 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:18:53.024   11:40:17 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:18:53.024   11:40:17 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable
00:18:53.024   11:40:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:18:53.024  ************************************
00:18:53.024  START TEST bdev_verify
00:18:53.024  ************************************
00:18:53.024   11:40:17 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:18:53.024  [2024-12-16 11:40:17.958207] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:18:53.024  [2024-12-16 11:40:17.958331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101079 ]
00:18:53.024  [2024-12-16 11:40:18.106430] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:18:53.024  [2024-12-16 11:40:18.158686] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:18:53.024  [2024-12-16 11:40:18.158804] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:18:53.024  Running I/O for 5 seconds...
00:18:54.530      13435.00 IOPS,    52.48 MiB/s
[2024-12-16T11:40:21.535Z]     15362.00 IOPS,    60.01 MiB/s
[2024-12-16T11:40:22.478Z]     16381.00 IOPS,    63.99 MiB/s
[2024-12-16T11:40:23.439Z]     16043.00 IOPS,    62.67 MiB/s
[2024-12-16T11:40:23.439Z]     16111.40 IOPS,    62.94 MiB/s
00:18:57.372                                                                                                  Latency(us)
00:18:57.372  
[2024-12-16T11:40:23.439Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:57.372  Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:57.372  	 Verification LBA range: start 0x0 length 0x2000
00:18:57.372  	 raid5f              :       5.01    8030.93      31.37       0.00     0.00   23819.40    1624.09   24726.25
00:18:57.372  Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:57.372  	 Verification LBA range: start 0x2000 length 0x2000
00:18:57.372  	 raid5f              :       5.01    8050.05      31.45       0.00     0.00   23938.35     184.23   24497.30
00:18:57.372  
[2024-12-16T11:40:23.439Z]  ===================================================================================================================
00:18:57.372  
[2024-12-16T11:40:23.439Z]  Total                       :              16080.98      62.82       0.00     0.00   23878.96     184.23   24726.25
00:18:57.631  
00:18:57.631  real	0m5.734s
00:18:57.631  user	0m10.663s
00:18:57.631  sys	0m0.235s
00:18:57.631  ************************************
00:18:57.631  END TEST bdev_verify
00:18:57.631  ************************************
00:18:57.631   11:40:23 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:18:57.631   11:40:23 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x
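The bdev_verify run above and the bdev_verify_big_io / bdev_write_zeroes runs that follow all drive the same bdevperf example binary against the generated bdev.json; only queue depth, IO size, workload and duration change. The three invocations, lifted from the run_test lines in this log:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

    "$BDEVPERF" --json "$CONF" -q 128 -o 4096  -w verify       -t 5 -C -m 0x3   # bdev_verify (above)
    "$BDEVPERF" --json "$CONF" -q 128 -o 65536 -w verify       -t 5 -C -m 0x3   # bdev_verify_big_io
    "$BDEVPERF" --json "$CONF" -q 128 -o 4096  -w write_zeroes -t 1             # bdev_write_zeroes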
00:18:57.631   11:40:23 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:18:57.631   11:40:23 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:18:57.631   11:40:23 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable
00:18:57.631   11:40:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:18:57.631  ************************************
00:18:57.631  START TEST bdev_verify_big_io
00:18:57.631  ************************************
00:18:57.631   11:40:23 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:18:57.891  [2024-12-16 11:40:23.756934] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:18:57.891  [2024-12-16 11:40:23.757109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101160 ]
00:18:57.891  [2024-12-16 11:40:23.914277] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:18:58.150  [2024-12-16 11:40:23.958627] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:18:58.150  [2024-12-16 11:40:23.958731] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:18:58.150  Running I/O for 5 seconds...
00:19:00.465        760.00 IOPS,    47.50 MiB/s
[2024-12-16T11:40:27.470Z]       761.00 IOPS,    47.56 MiB/s
[2024-12-16T11:40:28.407Z]       846.00 IOPS,    52.88 MiB/s
[2024-12-16T11:40:29.346Z]       856.00 IOPS,    53.50 MiB/s
[2024-12-16T11:40:29.346Z]       863.20 IOPS,    53.95 MiB/s
00:19:03.279                                                                                                  Latency(us)
00:19:03.279  
[2024-12-16T11:40:29.346Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:03.279  Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:03.279  	 Verification LBA range: start 0x0 length 0x200
00:19:03.279  	 raid5f              :       5.15     443.71      27.73       0.00     0.00 7083291.83     225.37  329683.28
00:19:03.279  Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:03.279  	 Verification LBA range: start 0x200 length 0x200
00:19:03.279  	 raid5f              :       5.16     442.74      27.67       0.00     0.00 7146639.65     141.30  331514.86
00:19:03.279  
[2024-12-16T11:40:29.346Z]  ===================================================================================================================
00:19:03.279  
[2024-12-16T11:40:29.346Z]  Total                       :                886.45      55.40       0.00     0.00 7114965.74     141.30  331514.86
00:19:03.538  
00:19:03.538  real	0m5.882s
00:19:03.538  user	0m10.962s
00:19:03.538  sys	0m0.231s
00:19:03.538   11:40:29 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:03.538   11:40:29 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:19:03.538  ************************************
00:19:03.538  END TEST bdev_verify_big_io
00:19:03.538  ************************************
00:19:03.798   11:40:29 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:03.798   11:40:29 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:19:03.798   11:40:29 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable
00:19:03.798   11:40:29 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:19:03.798  ************************************
00:19:03.798  START TEST bdev_write_zeroes
00:19:03.798  ************************************
00:19:03.798   11:40:29 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:03.798  [2024-12-16 11:40:29.703298] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:19:03.798  [2024-12-16 11:40:29.703425] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101242 ]
00:19:03.798  [2024-12-16 11:40:29.852993] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:04.057  [2024-12-16 11:40:29.900449] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:19:04.057  Running I/O for 1 seconds...
00:19:05.437      27759.00 IOPS,   108.43 MiB/s
00:19:05.437                                                                                                  Latency(us)
00:19:05.437  
[2024-12-16T11:40:31.504Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:05.437  Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:05.437  	 raid5f              :       1.01   27723.27     108.29       0.00     0.00    4602.69    1466.69    6238.80
00:19:05.437  
[2024-12-16T11:40:31.504Z]  ===================================================================================================================
00:19:05.437  
[2024-12-16T11:40:31.504Z]  Total                       :              27723.27     108.29       0.00     0.00    4602.69    1466.69    6238.80
00:19:05.437  
00:19:05.437  real	0m1.722s
00:19:05.437  user	0m1.391s
00:19:05.437  sys	0m0.211s
00:19:05.437  ************************************
00:19:05.437  END TEST bdev_write_zeroes
00:19:05.437  ************************************
00:19:05.437   11:40:31 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:05.437   11:40:31 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:19:05.437   11:40:31 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:05.437   11:40:31 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:19:05.437   11:40:31 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable
00:19:05.437   11:40:31 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:19:05.437  ************************************
00:19:05.437  START TEST bdev_json_nonenclosed
00:19:05.437  ************************************
00:19:05.437   11:40:31 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:05.437  [2024-12-16 11:40:31.494180] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:19:05.437  [2024-12-16 11:40:31.494289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101279 ]
00:19:05.696  [2024-12-16 11:40:31.655153] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:05.696  [2024-12-16 11:40:31.703384] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:19:05.696  [2024-12-16 11:40:31.703507] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:19:05.696  [2024-12-16 11:40:31.703535] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:19:05.696  [2024-12-16 11:40:31.703570] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:05.956  
00:19:05.956  real	0m0.406s
00:19:05.956  user	0m0.173s
00:19:05.956  sys	0m0.130s
00:19:05.956  ************************************
00:19:05.956  END TEST bdev_json_nonenclosed
00:19:05.956  ************************************
00:19:05.956   11:40:31 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:05.956   11:40:31 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:19:05.956   11:40:31 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:05.956   11:40:31 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:19:05.956   11:40:31 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable
00:19:05.956   11:40:31 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:19:05.956  ************************************
00:19:05.956  START TEST bdev_json_nonarray
00:19:05.956  ************************************
00:19:05.956   11:40:31 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:05.956  [2024-12-16 11:40:31.958244] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:19:05.956  [2024-12-16 11:40:31.958367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101304 ]
00:19:06.215  [2024-12-16 11:40:32.117562] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:06.215  [2024-12-16 11:40:32.161272] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:19:06.215  [2024-12-16 11:40:32.161477] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:19:06.215  [2024-12-16 11:40:32.161513] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:19:06.215  [2024-12-16 11:40:32.161531] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:06.215  
00:19:06.215  real	0m0.399s
00:19:06.215  user	0m0.169s
00:19:06.215  sys	0m0.125s
00:19:06.215   11:40:32 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:06.215   11:40:32 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:19:06.215  ************************************
00:19:06.215  END TEST bdev_json_nonarray
00:19:06.215  ************************************
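bdev_json_nonenclosed and bdev_json_nonarray are negative tests: they feed bdevperf --json files that json_config.c must reject, which is why both runs above end in *ERROR* lines and "spdk_app_stop'd on non-zero" while the tests themselves still pass. The actual fixture contents are not reproduced in this log; hypothetical minimal payloads that would trip each check could look like:

    # Hypothetical stand-ins for test/bdev/nonenclosed.json and nonarray.json (illustrative only):
    echo '[]'                   > nonenclosed.json   # top-level value is not enclosed in {}
    echo '{ "subsystems": {} }' > nonarray.json      # "subsystems" is present but not an array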
00:19:06.474   11:40:32 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]]
00:19:06.474   11:40:32 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]]
00:19:06.474   11:40:32 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]]
00:19:06.474   11:40:32 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:19:06.474   11:40:32 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup
00:19:06.474   11:40:32 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:19:06.474   11:40:32 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:19:06.474   11:40:32 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]]
00:19:06.474   11:40:32 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]]
00:19:06.474   11:40:32 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]]
00:19:06.474   11:40:32 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]]
00:19:06.474  ************************************
00:19:06.474  END TEST blockdev_raid5f
00:19:06.474  ************************************
00:19:06.474  
00:19:06.474  real	0m34.551s
00:19:06.474  user	0m47.003s
00:19:06.474  sys	0m4.420s
00:19:06.474   11:40:32 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:06.474   11:40:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:19:06.474    11:40:32  -- spdk/autotest.sh@194 -- # uname -s
00:19:06.474   11:40:32  -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:19:06.474   11:40:32  -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:19:06.474   11:40:32  -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:19:06.474   11:40:32  -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:19:06.474   11:40:32  -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']'
00:19:06.474   11:40:32  -- spdk/autotest.sh@256 -- # timing_exit lib
00:19:06.474   11:40:32  -- common/autotest_common.sh@730 -- # xtrace_disable
00:19:06.474   11:40:32  -- common/autotest_common.sh@10 -- # set +x
00:19:06.474   11:40:32  -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']'
00:19:06.474   11:40:32  -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']'
00:19:06.474   11:40:32  -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']'
00:19:06.474   11:40:32  -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']'
00:19:06.474   11:40:32  -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:19:06.474   11:40:32  -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:19:06.474   11:40:32  -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:19:06.474   11:40:32  -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']'
00:19:06.474   11:40:32  -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']'
00:19:06.474   11:40:32  -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:19:06.474   11:40:32  -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:19:06.474   11:40:32  -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:19:06.474   11:40:32  -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:19:06.474   11:40:32  -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:19:06.474   11:40:32  -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]]
00:19:06.474   11:40:32  -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:19:06.474   11:40:32  -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:19:06.474   11:40:32  -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]]
00:19:06.474   11:40:32  -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT
00:19:06.474   11:40:32  -- spdk/autotest.sh@383 -- # timing_enter post_cleanup
00:19:06.474   11:40:32  -- common/autotest_common.sh@724 -- # xtrace_disable
00:19:06.474   11:40:32  -- common/autotest_common.sh@10 -- # set +x
00:19:06.474   11:40:32  -- spdk/autotest.sh@384 -- # autotest_cleanup
00:19:06.474   11:40:32  -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:19:06.474   11:40:32  -- common/autotest_common.sh@1393 -- # xtrace_disable
00:19:06.474   11:40:32  -- common/autotest_common.sh@10 -- # set +x
00:19:08.378  INFO: APP EXITING
00:19:08.378  INFO: killing all VMs
00:19:08.378  INFO: killing vhost app
00:19:08.378  INFO: EXIT DONE
00:19:08.946  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:19:08.946  Waiting for block devices as requested
00:19:08.946  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:19:09.206  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:19:10.145  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:19:10.145  Cleaning
00:19:10.145  Removing:    /var/run/dpdk/spdk0/config
00:19:10.145  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:19:10.145  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:19:10.145  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:19:10.145  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:19:10.145  Removing:    /var/run/dpdk/spdk0/fbarray_memzone
00:19:10.145  Removing:    /var/run/dpdk/spdk0/hugepage_info
00:19:10.145  Removing:    /dev/shm/spdk_tgt_trace.pid69315
00:19:10.145  Removing:    /var/run/dpdk/spdk0
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid100368
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid100627
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid100661
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid100692
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid100913
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid101079
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid101160
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid101242
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid101279
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid101304
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid69151
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid69315
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid69522
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid69609
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid69638
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid69749
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid69767
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid69950
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid70029
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid70114
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid70214
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid70300
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid70334
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid70365
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid70441
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid70547
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid70985
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid71039
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid71086
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid71102
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid71171
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid71187
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid71247
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid71263
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid71316
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid71334
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid71376
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid71394
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid71521
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid71563
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid71646
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid72831
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid73026
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid73155
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid73776
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid73977
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid74110
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid74727
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid75046
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid75175
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid76527
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid76769
00:19:10.145  Removing:    /var/run/dpdk/spdk_pid76898
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid78256
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid78498
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid78627
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid79973
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid80408
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid80537
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid81980
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid82228
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid82363
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid83798
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid84046
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid84181
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid85617
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid86093
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid86222
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid86349
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid86760
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid87483
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid87861
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid88553
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid88978
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid89716
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid90114
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid92032
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid92459
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid92884
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid94919
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid95393
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid95893
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid96927
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid97244
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid98159
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid98471
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid99389
00:19:10.405  Removing:    /var/run/dpdk/spdk_pid99706
00:19:10.405  Clean
00:19:10.405   11:40:36  -- common/autotest_common.sh@1451 -- # return 0
00:19:10.405   11:40:36  -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:19:10.405   11:40:36  -- common/autotest_common.sh@730 -- # xtrace_disable
00:19:10.405   11:40:36  -- common/autotest_common.sh@10 -- # set +x
00:19:10.664   11:40:36  -- spdk/autotest.sh@387 -- # timing_exit autotest
00:19:10.664   11:40:36  -- common/autotest_common.sh@730 -- # xtrace_disable
00:19:10.664   11:40:36  -- common/autotest_common.sh@10 -- # set +x
00:19:10.664   11:40:36  -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:19:10.664   11:40:36  -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:19:10.664   11:40:36  -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:19:10.664   11:40:36  -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:19:10.664    11:40:36  -- spdk/autotest.sh@394 -- # hostname
00:19:10.664   11:40:36  -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:19:10.924  geninfo: WARNING: invalid characters removed from testname!
00:19:32.867   11:40:58  -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:19:35.403   11:41:00  -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:19:37.309   11:41:02  -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:19:39.213   11:41:04  -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:19:41.118   11:41:07  -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:19:43.680   11:41:09  -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:19:45.588   11:41:11  -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
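The coverage post-processing above is the standard lcov flow: capture a test-time tracefile tagged with the host name, merge it with the pre-test baseline, then repeatedly strip paths that should not count toward SPDK coverage (DPDK, system headers under /usr, bundled examples and apps). Condensed from the commands in this log, with the long list of --rc flags folded into $LCOV_OPTS for readability (the same switches are exported a little further down):

    OUT=/home/vagrant/spdk_repo/spdk/../output
    LCOV="lcov $LCOV_OPTS -q"    # LCOV_OPTS carries the branch/function-coverage --rc switches

    $LCOV -c --no-external -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o "$OUT/cov_test.info"
    $LCOV -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    $LCOV -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
    $LCOV -r "$OUT/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$OUT/cov_total.info"
    $LCOV -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"
    # ...plus '*/app/spdk_lspci/*' and '*/app/spdk_top/*' in the full run above.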
00:19:45.588     11:41:11  -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:19:45.588      11:41:11  -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:19:45.588      11:41:11  -- common/autotest_common.sh@1681 -- $ lcov --version
00:19:45.588     11:41:11  -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:19:45.588     11:41:11  -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:19:45.588     11:41:11  -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:19:45.588     11:41:11  -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:19:45.588     11:41:11  -- scripts/common.sh@336 -- $ IFS=.-:
00:19:45.588     11:41:11  -- scripts/common.sh@336 -- $ read -ra ver1
00:19:45.588     11:41:11  -- scripts/common.sh@337 -- $ IFS=.-:
00:19:45.588     11:41:11  -- scripts/common.sh@337 -- $ read -ra ver2
00:19:45.588     11:41:11  -- scripts/common.sh@338 -- $ local 'op=<'
00:19:45.588     11:41:11  -- scripts/common.sh@340 -- $ ver1_l=2
00:19:45.588     11:41:11  -- scripts/common.sh@341 -- $ ver2_l=1
00:19:45.588     11:41:11  -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:19:45.588     11:41:11  -- scripts/common.sh@344 -- $ case "$op" in
00:19:45.588     11:41:11  -- scripts/common.sh@345 -- $ : 1
00:19:45.588     11:41:11  -- scripts/common.sh@364 -- $ (( v = 0 ))
00:19:45.588     11:41:11  -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:45.588      11:41:11  -- scripts/common.sh@365 -- $ decimal 1
00:19:45.588      11:41:11  -- scripts/common.sh@353 -- $ local d=1
00:19:45.588      11:41:11  -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:19:45.588      11:41:11  -- scripts/common.sh@355 -- $ echo 1
00:19:45.588     11:41:11  -- scripts/common.sh@365 -- $ ver1[v]=1
00:19:45.588      11:41:11  -- scripts/common.sh@366 -- $ decimal 2
00:19:45.588      11:41:11  -- scripts/common.sh@353 -- $ local d=2
00:19:45.588      11:41:11  -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:19:45.588      11:41:11  -- scripts/common.sh@355 -- $ echo 2
00:19:45.588     11:41:11  -- scripts/common.sh@366 -- $ ver2[v]=2
00:19:45.588     11:41:11  -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:19:45.588     11:41:11  -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:19:45.588     11:41:11  -- scripts/common.sh@368 -- $ return 0
00:19:45.588     11:41:11  -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:45.588     11:41:11  -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:19:45.588  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:45.588  		--rc genhtml_branch_coverage=1
00:19:45.588  		--rc genhtml_function_coverage=1
00:19:45.588  		--rc genhtml_legend=1
00:19:45.588  		--rc geninfo_all_blocks=1
00:19:45.588  		--rc geninfo_unexecuted_blocks=1
00:19:45.588  		
00:19:45.588  		'
00:19:45.588     11:41:11  -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:19:45.588  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:45.588  		--rc genhtml_branch_coverage=1
00:19:45.588  		--rc genhtml_function_coverage=1
00:19:45.588  		--rc genhtml_legend=1
00:19:45.588  		--rc geninfo_all_blocks=1
00:19:45.588  		--rc geninfo_unexecuted_blocks=1
00:19:45.588  		
00:19:45.588  		'
00:19:45.588     11:41:11  -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 
00:19:45.588  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:45.588  		--rc genhtml_branch_coverage=1
00:19:45.588  		--rc genhtml_function_coverage=1
00:19:45.588  		--rc genhtml_legend=1
00:19:45.588  		--rc geninfo_all_blocks=1
00:19:45.588  		--rc geninfo_unexecuted_blocks=1
00:19:45.588  		
00:19:45.588  		'
00:19:45.588     11:41:11  -- common/autotest_common.sh@1695 -- $ LCOV='lcov 
00:19:45.588  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:45.588  		--rc genhtml_branch_coverage=1
00:19:45.588  		--rc genhtml_function_coverage=1
00:19:45.588  		--rc genhtml_legend=1
00:19:45.588  		--rc geninfo_all_blocks=1
00:19:45.588  		--rc geninfo_unexecuted_blocks=1
00:19:45.588  		
00:19:45.588  		'
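The scripts/common.sh trace just above ('lt 1.15 2' via cmp_versions) is a pure-bash version comparison: split both version strings on '.', '-' and ':', then compare field by field numerically. Here it returns true for "1.15 < 2", after which lcov_rc_opt is set to the lcov_-prefixed --rc spellings exported above. A simplified sketch of the idea, not the full cmp_versions implementation:

    # Returns success (0) when $1 is strictly older than $2; numeric fields only.
    version_lt() {
        local -a a b
        local i
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal
    }
    version_lt 1.15 2 && echo "lcov < 2.x: keep the lcov_-prefixed --rc option names"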
00:19:45.588    11:41:11  -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:19:45.588     11:41:11  -- scripts/common.sh@15 -- $ shopt -s extglob
00:19:45.588     11:41:11  -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:19:45.588     11:41:11  -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:19:45.588     11:41:11  -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:19:45.588      11:41:11  -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:45.588      11:41:11  -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:45.588      11:41:11  -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:45.588      11:41:11  -- paths/export.sh@5 -- $ export PATH
00:19:45.588      11:41:11  -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:45.588    11:41:11  -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:19:45.588      11:41:11  -- common/autobuild_common.sh@479 -- $ date +%s
00:19:45.588     11:41:11  -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1734349271.XXXXXX
00:19:45.588    11:41:11  -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1734349271.xWlOVn
00:19:45.588    11:41:11  -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:19:45.588    11:41:11  -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']'
00:19:45.588     11:41:11  -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:19:45.588    11:41:11  -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:19:45.588    11:41:11  -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:19:45.588    11:41:11  -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp  --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:19:45.588     11:41:11  -- common/autobuild_common.sh@495 -- $ get_config_params
00:19:45.588     11:41:11  -- common/autotest_common.sh@407 -- $ xtrace_disable
00:19:45.588     11:41:11  -- common/autotest_common.sh@10 -- $ set +x
00:19:45.588    11:41:11  -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:19:45.588    11:41:11  -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:19:45.588    11:41:11  -- pm/common@17 -- $ local monitor
00:19:45.588    11:41:11  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:19:45.588    11:41:11  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:19:45.588    11:41:11  -- pm/common@25 -- $ sleep 1
00:19:45.588     11:41:11  -- pm/common@21 -- $ date +%s
00:19:45.588     11:41:11  -- pm/common@21 -- $ date +%s
00:19:45.588    11:41:11  -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1734349271
00:19:45.588    11:41:11  -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1734349271
00:19:45.588  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1734349271_collect-vmstat.pm.log
00:19:45.588  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1734349271_collect-cpu-load.pm.log
00:19:46.526    11:41:12  -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:19:46.526   11:41:12  -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:19:46.526   11:41:12  -- spdk/autopackage.sh@14 -- $ timing_finish
00:19:46.526   11:41:12  -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:19:46.526   11:41:12  -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:19:46.527   11:41:12  -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:19:46.785   11:41:12  -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:19:46.785   11:41:12  -- pm/common@29 -- $ signal_monitor_resources TERM
00:19:46.785   11:41:12  -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:19:46.785   11:41:12  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:19:46.785   11:41:12  -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:19:46.785   11:41:12  -- pm/common@44 -- $ pid=102803
00:19:46.785   11:41:12  -- pm/common@50 -- $ kill -TERM 102803
00:19:46.785   11:41:12  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:19:46.785   11:41:12  -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:19:46.785   11:41:12  -- pm/common@44 -- $ pid=102804
00:19:46.785   11:41:12  -- pm/common@50 -- $ kill -TERM 102804
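The pm/common trace here is the shutdown half of the resource monitors started earlier (the collect-cpu-load / collect-vmstat commands at 11:41:11, which record their pids under the power/ output directory): for each monitor, check its pid file and TERM the recorded pid. A stripped-down sketch of that stop path, with paths as in this log; the real helper does a little more bookkeeping:

    POWER=/home/vagrant/spdk_repo/spdk/../output/power
    for name in collect-cpu-load collect-vmstat; do
        pidfile="$POWER/$name.pid"
        if [[ -e "$pidfile" ]]; then
            # Each monitor is expected to have written its pid here when it started.
            kill -TERM "$(<"$pidfile")"
        fi
    done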
00:19:46.785  + [[ -n 6167 ]]
00:19:46.785  + sudo kill 6167
00:19:46.794  [Pipeline] }
00:19:46.809  [Pipeline] // timeout
00:19:46.814  [Pipeline] }
00:19:46.827  [Pipeline] // stage
00:19:46.834  [Pipeline] }
00:19:46.847  [Pipeline] // catchError
00:19:46.855  [Pipeline] stage
00:19:46.857  [Pipeline] { (Stop VM)
00:19:46.868  [Pipeline] sh
00:19:47.149  + vagrant halt
00:19:49.692  ==> default: Halting domain...
00:19:56.277  [Pipeline] sh
00:19:56.561  + vagrant destroy -f
00:19:59.095  ==> default: Removing domain...
00:19:59.106  [Pipeline] sh
00:19:59.383  + mv output /var/jenkins/workspace/raid-vg-autotest_2/output
00:19:59.391  [Pipeline] }
00:19:59.405  [Pipeline] // stage
00:19:59.410  [Pipeline] }
00:19:59.424  [Pipeline] // dir
00:19:59.429  [Pipeline] }
00:19:59.443  [Pipeline] // wrap
00:19:59.449  [Pipeline] }
00:19:59.463  [Pipeline] // catchError
00:19:59.474  [Pipeline] stage
00:19:59.476  [Pipeline] { (Epilogue)
00:19:59.489  [Pipeline] sh
00:19:59.769  + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:20:05.069  [Pipeline] catchError
00:20:05.071  [Pipeline] {
00:20:05.084  [Pipeline] sh
00:20:05.368  + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:20:05.368  Artifacts sizes are good
00:20:05.377  [Pipeline] }
00:20:05.391  [Pipeline] // catchError
00:20:05.402  [Pipeline] archiveArtifacts
00:20:05.409  Archiving artifacts
00:20:05.517  [Pipeline] cleanWs
00:20:05.529  [WS-CLEANUP] Deleting project workspace...
00:20:05.529  [WS-CLEANUP] Deferred wipeout is used...
00:20:05.535  [WS-CLEANUP] done
00:20:05.537  [Pipeline] }
00:20:05.552  [Pipeline] // stage
00:20:05.558  [Pipeline] }
00:20:05.571  [Pipeline] // node
00:20:05.577  [Pipeline] End of Pipeline
00:20:05.643  Finished: SUCCESS